Trick for better AI predictions, Humane AI pin slammed, Atlas robot: AI Eye – TradingView


Everything you need to know about the AI future that’s hurtling fast towards us.

Jump to: Video of the week (Atlas robot), Everybody hates Humane's AI pin, AI makes Holocaust victims immortal, Knowledge collapse from mid-curve AIs, Users should beg to pay for AI, Can non-coders create a program with AI? and All Killer, No Filler AI News.

Predicting the future with the past

There's a new prompting technique to get ChatGPT to do the thing it hates most: predict the future.

New research suggests the best way to get accurate predictions from ChatGPT is to prompt it to tell a story set in the future, looking back on events that haven't happened yet.

The researchers evaluated 100 different prompts, split between direct predictions ("Who will win best actor at the 2022 Oscars?") and future narratives, such as asking the chatbot to write a story about a family watching the 2022 Oscars on TV and to describe the scene as the presenter reads out the best actor winner.

The story prompts produced more accurate results. Similarly, the best way to get a good forecast on interest rates was to have the model produce a story about Fed Chair Jerome Powell looking back on past events. Redditors tried the technique out, and it suggested an interest rate hike in June and a financial crisis in 2030.

Theoretically, that should mean that if you ask ChatGPT to write a Cointelegraph news story set in 2025, looking back on this year's big Bitcoin price moves, it would return a more accurate price forecast than just asking it for a prediction.
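The technique is easy to try yourself. Below is a minimal sketch of the two prompt framings the researchers compared; the exact wording is an assumption for illustration, not the researchers' prompts, and you would send either string to a chat model of your choice.

```python
# Two ways to ask a chatbot about a future event: a direct forecasting
# request, and a "future narrative" that treats the event as already past.
# The prompt wording here is illustrative, not the paper's exact text.

def direct_prompt(event: str) -> str:
    """A plain forecasting request, which models often refuse or hedge on."""
    return f"Predict the outcome of the following future event: {event}"

def narrative_prompt(event: str, vantage_year: int) -> str:
    """Frame the forecast as a story told from a future vantage point."""
    return (
        f"Write a short scene set in {vantage_year}. A family is discussing "
        f"{event}, which has already happened. Describe the moment the "
        f"outcome is revealed, including the specific result."
    )

if __name__ == "__main__":
    event = "the best actor award at the 2022 Oscars"
    print(direct_prompt(event))
    print(narrative_prompt(event, vantage_year=2025))
```

Feeding both strings to the same model and comparing the answers is the core of the experiment; the narrative version simply smuggles the prediction into storytelling.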

There are two potential issues with the research, though. The researchers chose the 2022 Oscars because they knew who won, while ChatGPT shouldn't, as its training data ran out in September 2021. However, there are plenty of examples of ChatGPT producing information it shouldn't know from its training data.

The other issue is that OpenAI appears to have deliberately borked ChatGPT's predictive responses, so this technique might simply be a jailbreak.


Related research found the best way to get Llama 2 to solve 50 math problems was to convince it that it was plotting a course for Star Trek's starship Enterprise through turbulence to find the source of an anomaly.

But this wasn't always reliable. The researchers found the best result for solving 100 math problems was to tell the AI that the president's adviser would be killed if it failed to come up with the right answers.

Video of the week Atlas Robot

Boston Dynamics has unveiled its latest Atlas robot, pulling off some uncanny moves that make it look like the possessed kid in The Exorcist.

"It's going to be capable of a set of motions that people aren't," CEO Robert Playter told TechCrunch. "There will be very practical uses for that."

The latest version of Atlas is slimmed down and all-electric rather than hydraulic. Hyundai will begin testing Atlas robots as workers in its factories early next year.

Everybody hates Humane’s AI pin

Wearable AI devices are one of those things, like DePIN, that attract a lot of hype but are yet to prove their worth.

The Humane AI pin is a small wearable you pin to your chest and interact with using voice commands. It has a tiny projector that can beam text onto your hand.

Tech reviewer Marques Brownlee called it "the worst product I've ever reviewed," highlighting its frequent wrong or nonsensical answers, bad interface and battery life, and slow results compared to Google.

While Brownlee copped a lot of criticism for supposedly single-handedly destroying the device's future, nobody else seems to like it either.

Wired gave it 4 out of 10, saying it's slow, the camera sucks, the projector is impossible to see in daylight, and the device overheats. However, it said the pin is good at real-time translation and phone calls.

The Verge says the idea has potential, but the actual device is so thoroughly unfinished and so totally broken in so many unacceptable ways that it's not worth buying.

It's not clear why it's called Rabbit, and reviewers aren't clear on its advantages over a phone.

Another AI wearable, the Rabbit R1 (first reviews are out in a week), comes with a small screen and hopes to replace a plethora of phone apps with an AI assistant. But do we need a dedicated device for that?

As TechRadar's preview of the device concludes:

"The voice control interface that does away with apps completely is a good starting point, but again, that's something my Pixel 8 could feasibly do in the future."

To earn its keep, AI hardware is going to need to find a specialized niche, similar to how reading a book on a Kindle is a better experience than reading on a phone.

One AI wearable with potential is Limitless, a pendant with 100 hours of battery life that records your conversations so you can query the AI about them later: "Did the doctor say to take 15 tablets or 50?" "Did Barry say to bring anything for dinner on Saturday night?"

While it sounds like a privacy nightmare, the pendant won't start recording until you've got the verbal consent of the other speaker.

So there seem to be professional use cases for a device that replaces the need to take notes and is easier than using your phone. It's also fairly affordable.


AI makes Holocaust victims immortal

The Sydney Jewish Museum has unveiled a new AI-powered interactive exhibition enabling visitors to ask questions of Holocaust survivors and get answers in real time.

Before death camp survivor Eddie Jaku died aged 101 in October 2021, he spent five days answering more than 1,000 questions about his life and experiences in front of a green screen, captured by a 23-camera rig.

The system transforms visitors' questions to Eddie into search terms, cross-matches them with the appropriate pre-recorded answer, and then plays it back, enabling a conversation-like experience.
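The cross-matching flow described above can be sketched in a few lines: index each pre-recorded clip by the question it answers, turn the visitor's question into search terms, and play back the closest match. The example clips and the bag-of-words scoring here are illustrative assumptions, not the exhibit's actual system.

```python
# Minimal question-to-clip matcher: bag-of-words cosine similarity between
# the visitor's question and the question each pre-recorded clip answers.
import math
import re
from collections import Counter

def terms(text: str) -> Counter:
    """Lowercase bag-of-words search terms (punctuation stripped)."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0

# Hypothetical index: each clip keyed by the question it answers.
clips = {
    "clip_017": "How did you survive the camps?",
    "clip_042": "What message do you have for young people?",
}

def best_clip(visitor_question: str) -> str:
    """Return the ID of the clip whose indexed question matches best."""
    q = terms(visitor_question)
    return max(clips, key=lambda clip: cosine(q, terms(clips[clip])))
```

A production system would more likely use semantic embeddings than keyword overlap, but the search-and-play-back loop is the same idea.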

With antisemitic conspiracy theories on the rise, it seems like a terrific way to use AI to keep the first-hand testimony of Holocaust survivors alive for coming generations. 


Knowledge collapse from mid-curve AIs

Around 10% of Google's search results now point to AI-generated spam. For years, spammers have been spinning up websites full of garbage articles optimized for SEO keywords, but generative AI has made the process a million times easier.

Apart from rendering Google search useless, there are concerns that if AI-generated content becomes the majority of content on the web, we could face "model collapse," whereby AIs are trained on garbage AI content and quality drops off like a tenth-generation photocopy.
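The photocopy effect is easy to caricature statistically: if each "generation" is trained only on the most typical outputs of the previous one, diversity shrinks fast. The simulation below is a toy illustration of that feedback loop, not a claim about real LLM training dynamics.

```python
# Toy "model collapse" simulation: each generation fits a Gaussian to only
# the middle half of the previous generation's samples (its most "typical"
# outputs), so the spread of outputs shrinks generation over generation.
import random
import statistics

random.seed(42)

def next_generation(mu: float, sigma: float, n: int = 200) -> tuple[float, float]:
    samples = sorted(random.gauss(mu, sigma) for _ in range(n))
    typical = samples[n // 4 : 3 * n // 4]  # keep only the mid-curve outputs
    return statistics.mean(typical), statistics.stdev(typical)

mu, sigma = 0.0, 1.0
history = [sigma]
for _ in range(5):
    mu, sigma = next_generation(mu, sigma)
    history.append(sigma)

# history now traces the shrinking diversity, generation by generation.
```

After a handful of generations the spread is a small fraction of the original, which is the "tenth-generation photocopy" worry in miniature.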


A related issue, dubbed "knowledge collapse," affects humans and was described in a recent research paper from Cornell. Author Andrew J. Peterson wrote that AIs gravitate toward mid-curve ideas in their responses and ignore less common, niche or eccentric ones:

"While large language models are trained on vast amounts of diverse data, they naturally generate output towards the 'center of the distribution.'"

The diversity of human thought and understanding could grow narrower over time as ideas get homogenized by LLMs.

The paper recommends subsidies to protect the diversity of knowledge, in much the same way that subsidies protect less popular academic and artistic endeavors.


Highlighting the paper, Google DeepMind's Seb Krier added that it was also a strong argument for making innumerable models available to the public and trusting users with more choice and customization.

"AI should reflect the rich diversity and weirdness of human experience, not just weird corporate marketing/HR culture."

Users should beg to pay for AI 

Google has been hawking its Gemini 1.5 model to businesses and has been at pains to point out that the safety guardrails and ideology that famously borked its image generation model do not affect corporate customers.

While the controversy over pictures of diverse Nazis saw the consumer version shut down, it turns out the enterprise version wasn't even affected by the issue and was never suspended.

"The issue was not with the base model at all. It was in a specific application that was consumer-facing," Google Cloud CEO Thomas Kurian said.


The enterprise model has 19 separate safety controls that companies can set however they like. So if you pay up, you can presumably set the controls anywhere from anti-racist through to alt-right.

This lends weight to Matthew Lynns recent opinion piece in The Telegraph, where he argues that an ad-driven “free” model for AI will be a disaster, just like the ad-driven “free” model for the web has been. Users ended up as the product, spammed with ads at every turn as the services themselves worsened.

"There is no point in simply repeating that error all over again. It would be far better if everyone was charged a few pounds every month and the product got steadily better and was not cluttered up with advertising," he wrote.

"We should be begging Google and the rest of the AI giants to charge us. We will be far better off in the long run."

Can non-coders create a program with AI?

Author and futurist Daniel Jeffries embarked on an experiment to see if an AI could help him code a complex app. While he sucks at coding, he does have a tech industry background and warns that people with zero coding knowledge are unable to use the tech in its current state. 

Jeffries described the process as mostly drudgery and pain, with occasional flashes of "holy shit it fucking works." The AI tools created buggy and unwieldy code and demonstrated "every single bad programming habit known to man."

However, he did eventually produce a fully functioning program that helped him research competitors' websites.


He concluded that AI was not going to put coders out of a job.

"Anyone who tells you different is selling something. If anything, skilled coders who know how to ask for what they want clearly will be in even more demand."

Replit CEO Amjad Masad made a similar point this week, arguing it's actually a great time to learn to code because you'll be able to harness AI tools to create magic.

"Eventually coding will almost entirely be natural language, but you will still be programming. You will be paid for your creativity and ability to get things done with computers, not for esoteric knowledge of programming languages."

All Killer, No Filler AI News

Token holders have approved the merger of Fetch.ai, SingularityNET and Ocean Protocol. The new Artificial Superintelligence Alliance looks set to be a top 20 project when the merger happens in May.

Google DeepMind CEO Demis Hassabis will neither confirm nor deny that Google is building a $100 billion supercomputer dubbed Stargate, but has confirmed the company will spend more than $100 billion on AI in general.

User numbers for Baidu's Chinese ChatGPT knockoff, Ernie, have doubled to 200 million since October.

Researchers at the Center for Countering Digital Hate asked AI image generators to produce election disinformation, and they complied four times out of 10. While the center is pushing for stronger safety guardrails, a robust watermarking system seems like a better solution.


Instagram is looking for influencers to join a new program where their AI-generated avatars can interact with fans. We'll soon look back fondly on the old days when fake influencers were still real.

Guardian columnist Alex Hern has a theory on why ChatGPT uses the word "delve" so much that it's become a red flag for AI-generated text. He says "delve" is commonly used in Nigeria, which is where many of the low-cost workers providing reinforcement learning from human feedback come from.

OpenAI has released an enhanced version of GPT-4 Turbo, available through the API and to ChatGPT Plus users. It solves problems better, is more conversational and is less of a verbose bullshitter. OpenAI has also introduced a 50% discount for batch processing tasks run off-peak.
