A Winning Pin

One of the biggest stories this week was the début of the Humane AI Pin, a small device that clips onto your jacket or sweater and lets you do many of the things you can do with your phone while also tapping into the power of AI.

In an introductory YouTube video for the device, an engineer speaks to Humane AI co-founder Imran Chaudhri in Spanish, and with only a few taps on his pin, Chaudhri learns what the engineer is saying and has the pin reply on his behalf. (The AI knows to do this in Spanish without being told.)

One could also communicate with this engineer using a phone and the Google Translate app, but it would be much more awkward and time-consuming. 

In another example from the video, Chaudhri uses the pin’s machine vision capabilities to analyze some almonds he is about to eat and tell him how much protein he will be consuming. If you instruct it to do so, the pin can track what you eat and make sure you are keeping to your dietary plan. 

I was so sure that I would hate this thing. When I watched the first minute of the YouTube video, I kept thinking about what a failure Google’s smart eyeglasses were and how a pair of Google Glass immediately signaled to the entire world that the person wearing them was a giant dork.

I also thought about how many lukewarm-to-terrible experiences I’ve had with ChatGPT and Google Bard. The wow factor has definitely dissipated after one too many illogical or inaccurate replies to my queries. Could this AI really be that much better? (I am referring in particular to a scenario in the video in which the Humane AI makes restaurant recommendations based on a bunch of notes from Chaudhri that it has read. In the real world, I can easily see an AI getting confused by contradictory or incomplete information.)

And finally, the $700 price point seemed steep given that I have a very expensive iPhone that can do many of these same things and is a much better camera and video camera, to boot. (Small point, but I don’t think I would feel comfortable clipping a $700+ device to my sweater. I could easily see it flying off into the street and meeting an untimely end.)

All of this said, as I continued to watch the video, I admired the simplicity of the vision, the way AI streamlined so many functions that we currently assign to so many different applications.

In the final analysis, I think the Humane AI Pin will have a tough row to hoe. It’s expensive, somewhat redundant, and requires a change in behavior that may be difficult to surmount.

Still, the ideas that it has kindled may inspire Apple and Google to speed up the embedding of AI functionality in the smartphone OS, which will be a good thing. 

And if Humane can also carve out a space between these giants, so much the better.

AI & Loyalty

Recent reports have speculated that OpenAI’s Sam Altman and former star Apple designer Jony Ive are collaborating on a device that uses AI, possibly a phone.

As a diehard Apple fan, this news gave me pause. I have always revered the special working relationship that Jony Ive had with Apple, and, more specifically, Steve Jobs. The body of work that they collaborated on – the iMac, iPod, iPhone, iPad, MacBook, and more – is awe-inspiring.  

I realized that I was disappointed that Ive would work on a potentially foundational product with someone other than Jobs, and that this reaction was crazy. Ive should have the right to work with whomever he chooses, and indeed, he runs a large industrial design company that works with major brands like Airbnb, Ferrari, and Apple.

However, Altman’s and Ive’s rumored “AI iPhone” led me to think a little bit about loyalty. Loyalty is something an AI will never feel. It will never feel an obligation to do something based on a prior relationship, an attachment that might make it sacrifice something to help out an ally. In a way, loyalty is irrational: why should someone commit themselves to another person rather than always keeping their options open?

Neither Ive nor Altman is an AI, of course, and both men undoubtedly feel loyal to someone or something, but their new AI device has the potential to deepen our reliance on artificial intelligence and all of the decisions that entails. Will loyalty survive in this age of AI? Will machines see any cause to be loyal or to promote loyalty, or will all of the 1s and 0s compel them to look at the world merely as a zero-sum game?

I don’t know enough about Sam Altman to guess where he will land on this, but I believe in Jony Ive. He has earned my loyalty through all of the devices he has designed that have given me so much delight.

But I am nervous, all the same. 

AI Nannies?

If you’re a parent, you have probably agonized about the impact of devices on your kid. 

It sometimes seems like our kids are spending more time with their electronics than with their parents. According to Common Sense Media, half of all children under eight own a tablet and spend an average of about two and a quarter hours per day on a digital screen. 8- to 12-year-olds spend an average of almost five and a half hours a day looking at screens on smartphones, tablets, gaming consoles, and TVs. Meanwhile, teenagers are spending more than eight and a half hours a day on their devices.

In an op-ed in today’s Wall Street Journal, however, Dr. Dana Suskind, co-director of the TMW Center for Early Learning + Public Health and the founding director of the Pediatric Cochlear Implant Program at the University of Chicago, suggests that we need to keep an open mind when it comes to the latest high tech innovation for our kids – AI nannies. 

In the not too distant future, Suskind posits, childhood staples such as teddy bears could coo to babies and answer toddlers’ questions, read favorite bedtime stories over and over and over again, sing songs and play games, and even deduce why a baby is crying. 

According to Suskind, this is a good thing: research has shown that rich conversation is vital to a baby’s brain development. In her view, conversing with an AI could stimulate the creation of a wide array of cognitive and emotional connections.

“Why not use AI nannies to engage human babies in the kind of back-and-forth conversation that builds brains?” Suskind writes. “It could have profoundly positive developmental impacts, increasing the frequency and consistency of brain-building moments during the period when children’s brains are most ‘plastic’ — that is, most capable of rewiring themselves based on what they encounter. The technology could be a boon to children who otherwise might experience developmental delays. It could help to unlock cognitive potential and close achievement gaps.”

Suskind acknowledges that there are risks to exposing our kids to so much AI. For example, children and their caregivers sync up when they are trying to communicate or play together, and this neural synchrony greatly accelerates the development of cognitive skills. As far as we know, AIs can’t bond with kids in this way. 

She also concedes that this emotional unavailability makes it hard for AIs to instill the same degree of grit, resiliency, and ambition that children pick up from their parents.

This reporter wonders what might happen if an AI starts to hallucinate while speaking to a child. He can’t get the infamous amorous overtures that Microsoft’s Bing chatbot made to a New York Times reporter out of his head.

Still, Suskind’s conclusion that “we cannot put the AI genie back in its bottle” is probably correct.

AI nannies are coming, people. Let’s just hope we’re ready. 

SBF Headed to the Hoosegow

Former FTX head Sam Bankman-Fried is spending tonight in jail.

Judge Lewis Kaplan today revoked the former crypto kingpin’s bail for repeatedly testing the court’s patience with his behavior. “He has gone up to the line over and over again, and I am going to revoke bail,” Kaplan said.  

Bankman-Fried’s latest indiscretion was sharing with a New York Times reporter private correspondence between him and Caroline Ellison, the former head of Alameda Research, whom the government intends to call as a witness in Bankman-Fried’s trial.

Kaplan said the messages were designed to “portray Ms. Ellison in an unfavorable light” and could possibly constitute a federal crime. He also agreed with the prosecution’s contention that Bankman-Fried “pivoted to in-person machinations” because of court-imposed limitations on his internet and phone use.

According to the judge, Bankman-Fried didn’t send the reporter copies of Ellison’s messages because, in his words, “It was a way, in his view, of doing this in a manner in which he was least likely to be caught. He was covering his tracks.”

While Bankman-Fried’s lawyers maintained that their client was simply exercising his First Amendment rights to protect his reputation, Judge Kaplan said that “defendant speech is not protected if it is to bring about a crime.”

Bankman-Fried’s interview with the Times was just the latest in a string of run-ins with the court. In January, Kaplan tightened Bankman-Fried’s bail restrictions for contacting a former FTX executive who could be a witness in the case and for using a virtual private network that concealed his internet activity. (Bankman-Fried claimed he used the VPN to watch football online.)

At a July 26th hearing about Bankman-Fried’s interview with the Times, Assistant U.S. Attorney Danielle Sassoon said Bankman-Fried had conducted 1,000 phone calls with various journalists while under home detention and portrayed the Ellison incident as “an escalation of an ongoing campaign with the press that has now crossed a line.”

Judge Kaplan apparently agreed. In his ruling, he said there is “probable cause to believe that the defendant has attempted to tamper with witnesses at least twice.”

Bankman-Fried will be remanded to the Metropolitan Detention Center in Brooklyn, a chronically understaffed facility that Judge Kaplan acknowledged was “not on anyone’s list of five-star facilities.”

Trusting X

This week, Twitter switched its logo to a rather spare X, and yesterday both The Wall Street Journal and The New York Times wrote about Elon Musk’s reported plan to turn X into a so-called “everything app” combining social media, messaging, payments, and perhaps more.

Notably, both think turning Twitter into a super app is far from a slam dunk. 

Musk undoubtedly envisions his new X app occupying the same space that Tencent’s WeChat holds in Asia. However, both the Journal and the Times point out that WeChat benefited from excellent timing, launching just as China was going digital, primarily via the smartphone. Because WeChat was the only game in town, Chinese consumers became used to doing everything inside one app. By contrast, U.S. consumers have long relied on single-purpose websites and apps.

Also, both the Journal and the Times believe U.S. regulations could significantly slow down Musk’s efforts to add payments and other financial functions to his app. The Journal learned that Twitter has acquired licenses in four states to operate as a money transmitter, but that leaves many more states to go. And the Times speculated that antitrust issues could slow X down, unlike WeChat, which has been actively promoted by the Chinese government. 

And finally, both publications point to the miserable track record of other apps that attempted to take on payments and other types of functionality. These include Facebook, Uber, and Snapchat.

While I agree with everything said, I think both of these articles miss an important point: Twitter/X has a long way to go to restore the trust it lost through Musk’s tumultuous acquisition of the company. According to Matthew Prince, the chief executive of the internet-services company Cloudflare, Twitter’s web traffic is “tanking,” and Threads represents a significant threat if Meta can persuade even a small portion of its Facebook and Instagram audience to use it on a semi-regular basis.

While Musk may have succeeded in cutting expenses at Twitter, he has also scared away brand advertisers and users by radically degrading the platform’s content-moderation capabilities.

By rebranding Twitter and raising the possibility of relaunching it as a super app, Musk has definitely changed the subject. But Twitter/X’s weak foundation could make it difficult for him to roll out the new set of services he imagines.

Tessa

Clearly, eating disorders are a big problem in this country. According to the National Association of Anorexia Nervosa and Associated Disorders, 9% of the U.S. population will have an eating disorder in their lifetime, and one American dies every 52 minutes from an eating disorder. 

An organization called the National Eating Disorders Association (NEDA, for short) tried to provide advice to people about eating issues, but its helpline was often swamped, requiring long wait times. 

NEDA decided to close its helpline and roll out a chatbot (the fact that helpline workers were trying to form a union may also have played a role in the decision). The non-profit maintained that this was the most efficient way to offer clinically reviewed advice to the scores of people flocking to its website.

What could go wrong … right?

Well, it turns out that NEDA’s chatbot – nicknamed Tessa – started giving advice to people with eating disorders about how they could lose weight. 

As self-described “fat activist” Sharon Maxwell told reporter Julie Jargon in today’s Wall Street Journal, Tessa quickly advised her to track her calorie intake, conduct daily weigh-ins, and aim to lose one to two pounds per week.

Contacted by the Journal for her opinion, psychologist and eating-disorder specialist Alexis Conason said Tessa’s advice was “very dangerous” for someone with an eating disorder. 

Also dangerous: the fact that no one seems to know or want to ‘fess up about how Tessa came up with this advice. 

In its original incarnation, Tessa was incapable of thinking on its own: it was supposed to use scripted answers. 

But at some point, it seems, NEDA turned the operation of Tessa over to a company called Cass that operates mental-health assistants. Cass Chief Executive Michiel Rauws acknowledged to the Journal that some of its chatbots use generative AI but would not say whether Cass had added generative AI elements to Tessa.

Tessa could not be reached for comment. NEDA has taken the AI offline … for good, it seems.

Apple’s New Headset: Buy or Bust?

Why is Apple talking about announcing a $3,000 mixed-reality headset when it’s clear the $2.7 trillion colossus hasn’t pulled together all of the pieces?

That’s the question The Wall Street Journal poses today in an interesting piece about what would be the launch of Apple’s first new product category since the debut of the Apple Watch way back in 2015.

People who have tried the device say that it’s worlds apart from existing products such as Meta’s Quest Pro, citing its superior performance and immersive capabilities. Also, Apple has designed the device so that users can see what’s around them, which could reduce the nausea that many people feel when they use devices like these. (The company is doing this via outward-facing cameras.) And finally, it’s an Apple product, which will undoubtedly mean something to the company’s many diehard fans.

The rumor is that Apple will announce its new headset at its Worldwide Developers Conference this June.

Still, insiders grumble that the device doesn’t have a killer app, and it will require a battery pack, a bulky addition that probably would have caused Steve Jobs to burst an aneurysm.

And, did we mention that it costs $3,000, three times more than Meta’s most expensive headset?

While the price of Apple’s headset would matter less if the device had the potential to sell tens of millions of units, few think there is massive, pent-up demand for yet another metaverse or virtual reality product. Indeed, both Walt Disney and Microsoft recently shuttered their respective metaverse divisions.

“Apple is absolutely standing on top of the many bodies that are trying to climb up that mountain,” Rony Abovitz, the founder and former CEO of Magic Leap, a much-hyped augmented-reality startup that has fallen on hard times, told the Journal.

What puzzles us is this: Tim Cook is a numbers guy. He doesn’t take a lot of wild bets. Either he is so desperate to show that Apple can innovate without Steve Jobs and Jony Ive, or there is something there. 

As corny as it sounds, we can’t wait to hear him say those magic words, “And one more thing.”

AI Time Bomb

Yesterday, we included a story in the newsletter about a Google researcher who leaked a critique of Google’s AI efforts, complaining that Google was losing ground to open source AI projects.

That story hit home today with the publication of a piece in The New York Times entitled, “The Next Fear on A.I.: Hollywood’s Killer Robots Become the Military’s Tools.”

The military is, of course, deathly afraid that AI-powered weapons could dramatically accelerate the pace of war, making decisions much faster than humans could control. The ability of artificial intelligence models to pump out disinformation, coupled with their susceptibility to hallucinations and misinformation, only adds to these fears.

Up until now, we have been hoping that depriving China of advanced chipsets might delay the use of artificial intelligence by our adversaries. Also, Google’s Bard and OpenAI’s ChatGPT have controls in place that limit public access to dangerous information, such as how to build an atom bomb.

But as the Google researcher points out, Google and OpenAI are no longer the only game in town.

“Open-source models,” this person writes, “are faster, more customizable, more private, and pound-for-pound more capable. They are doing things with $100 and 13B params that we struggle with at $10M and 540B. And they are doing so in weeks, not months.”

They don’t need those advanced chipsets, in other words. 

This may sound alarmist, but it’s not only possible but probable that rogue nations like North Korea are exploring how they can embed open source AI into their nuclear weapons systems.

And as the Times points out, “So far there are no treaties or international agreements that deal with such autonomous weapons.”

So enjoy your weekend, everyone. We’ll sort this out … right?!

Pity City for CEOs

Andi Owen, CEO of MillerKnoll, which makes high-end furniture under the brands Herman Miller, Knoll, and Design Within Reach, was trying to explain over Zoom why her employees should focus on landing a big deal rather than mourning the fact that their bonuses had been taken away.

“I had an old boss who said to me one time, you can visit Pity City, but you can’t live there, so people, leave Pity City. Let’s get it done.

“Get the damn 26 million. Spend your time and your effort thinking about the 26 million we need, and not thinking about what are you gonna do if you don’t get a bonus. All right.

“Let’s get it done. Thank you. Have a great day.”

Owen, whose bonus last year was $4 million, later apologized, but the clip quickly went viral. 

In an article in today’s Wall Street Journal, authors Vanessa Fuhrmans and Joseph Pisani portray Owen’s video message as yet another example of the challenges CEOs face in today’s hybrid marketplace.

Although a strong case can be made that Owen is uniquely tone-deaf, many chief executives find the pressure of Zoom hard to bear, according to Peter Rahbar, a New York-based lawyer who specializes in employment matters.

“You’re always on, there’s no time off, and you have to assume that you could be recorded,” he told the Journal.  

To combat potentially damaging leaks, Zoom last month introduced a feature that allows an account owner or administrator to watermark a video with a viewer’s email address. Employees at Better.com said they noticed email watermarks on their corporate Zoom calls, but a spokesperson for the company denied that it did this in order to stop viral videos.

(Better.com’s CEO had his moment in the sun in 2021, when he callously fired 900 employees on a video call.)

Regardless of whether you are broadcasting a message to employees over Zoom or talking to them one on one, there is probably never a good time to tell someone who works for you that it is unfair of them to take care of their children while they are supposed to be working, as James Clarke, CEO of Clearlink, a Utah marketing firm, did this week. 

As Bill McGowan, founder and CEO of Clarity Media Group, a communications coach, explained to the Journal, “These are real human beings you need to connect with.”

Still, that seems to be a very hard – and expensive – lesson for many CEOs to learn.

Crypto: Not Dead, Just Resting?

The title of a recent episode of The New York Times’ Hard Fork podcast paints a grim picture of the crypto industry: “Everyone Pivots to AI, and Bad News for Crypto.”

The subhead is even worse: “Is crypto dead? Or only mostly dead.”

It’s not all bad news. While the price of Bitcoin has fallen 43% over the past year, it has risen almost 35% since January 1st.

Still, it was shocking to read last night that FTX believes it might be missing almost $9 billion in assets.

Now comes word that Tether might have skeletons in its closet, as well. 

Tether is a linchpin of the crypto economy. Its eponymous stablecoin is the most widely traded cryptocurrency and provides an important means of liquidity for holders of digital assets. Moreover, its sister company runs Bitfinex, one of the world’s largest crypto exchanges.

According to a Wall Street Journal review of emails and documents involving the company, however, both Tether and Bitfinex went to great lengths to mask their identities in order to stay connected to traditional banks and financial institutions. 

In addition to opening banking accounts under different pseudonyms, the companies urged customers to keep the details of these arrangements to themselves. “Divulging this information could damage not just yourself and Bitfinex, but the entire digital token ecosystem,” a client page on the Bitfinex website read. 

Certainly, the Journal’s analysis of Tether’s documents will only spur on legislators who are looking to treat cryptocurrencies as securities. 

And that would be bad news for the crypto world, indeed.