Humane AI – Pico Laser Projection – $230M AI Twist on an Old Scam

Quick Note: I’m Going to CES and SPIE AR/VR/MR 2024

Before getting to the article, I wanted my readers to know that I plan to attend CES from January 9th through the 12th. My schedule is just forming up as of this writing, so I currently have many time slots available. Most AR/VR/MR-specific companies exhibiting publicly will be in LVCC Central Hall or the Venetian like last year. While I am primarily there to meet with companies and see products, I always have some time to meet with blog fans.

I will also attend the SPIE AR/VR/MR conference from January 29th through the 31st. It is much easier to meet at this conference, as the conference and exhibits are all in one building.

If you want to meet up with me at either event, please email me at meet@kgontech.com

Introduction

Humane AI (Humane, hereafter) combines the mid-2010s failed concept of using a laser projector to project on the body with the early-2000s failed projector phone (something I wrote about in 2013), only they left out the phone’s flat-panel display and have feebler processing than a good smartphone. Rational people wonder what this does that a good smartphone can’t do much better, and you can count me as one of them.

This blog has written about various laser projector scams since the beginning of 2011. Scammers like to associate “laser” with near-mystic powers that violate all the laws of physics and rational thought. Their other favorite word for deceiving people is “hologram” (when the displays are not holograms). The new favorite buzzword is “AI.”

It looks like Humane started with an abysmally poor-quality laser projector in a phone-like device, and by saying it does “AI,” it is magically something new. Their pitch was good enough to raise $230 million from SK Networks (Korea), Microsoft, LG, Volvo, Qualcomm, OpenAI founder Sam Altman, and others. Granted, with so many big-name companies, the stakes were chump change for each giant company, but it does show foolishness on a grand scale.

Humane Adds AI to the Laser Projection (Hype Squared)

At first glance, Humane AI looks like it started with a monochrome laser projector and, realizing that that wouldn’t sell to venture capital firms (VCs), wrapped it in the current hype surrounding AI. Humane co-founders Bethany Bongiorno and Imran Chaudhri (left) have the “Steve Jobs” black-shirt costume down like other famous Silicon Valley scammers: Elizabeth Holmes, founder of Theranos; Sam Bankman-Fried, founder of FTX; and Rony Abovitz, founder of Magic Leap. Through the years, I have learned to be suspicious of the Steve Jobs pretender look; it’s not a complete tell, but it should make you suspicious.

Humane AI Founders (left), Elizabeth Holmes, Sam Bankman-Fried, Rony Abovitz, and Steve Jobs

This blog has documented for over 12 years how laser scanning is a terrible way to generate a display image. In short, the scanning process is too slow and inaccurate to generate a high-resolution image, and the lasers can’t be controlled quickly and accurately enough to give good color depth, not to mention the poor power efficiency.

The fundamental problems with pico projection include:

1. Where will you project from? Humane’s “big concept” is that they will magnetically pin the projector to your shirt or jacket, ignoring that it will make a normal shirt sag and flop around as one moves. For Humane’s introduction video, they pinned it on a heavy leather (or leather-like) jacket for support (what do you do in the summer?).

2. What and where will you project on? Humane’s “big concept,” taken from laser projector scams of the past (more on this later), is to project on the skin. But skin is a lousy, non-smooth projection screen with crevices and poor reflective qualities. Worse yet, it requires a person to hold their hand up wherever the projector happens to be pointing as it flops around hanging on your “shirt,” unlike a phone display, which you can hold where you want or set down.

3. Ambient light is the biggest killer for pico projectors due to the loss of contrast. Humane has not specified the projector’s brightness (in lumens), but it has a Class 2 laser, so it can’t project an image that is very bright compared to daylight. Humane’s demonstration videos are “staged” to put the hand’s projection surface in shadow or a dark environment. The Humane display is guaranteed to be invisible in outdoor daylight, which is about 100 times brighter than typical indoor lighting. Unlike flat-panel displays that absorb ambient light, projection displays directly compete with it.

4. Humane’s projector lacks color and grayscale depth. Humane’s display is monochrome cyan (blue-green), so there is no ability to highlight anything with color. Additionally, they demonstrated very limited grayscale depth: just “on,” “off,” and a half-level. Even if grayscale is theoretically possible with the Humane projector, the ability to see a grayscale image is severely hampered by using skin as a screen and by the lack of contrast due to ambient light.

5. Laser projection’s focus-free property is a double-edged sword. While laser scanning allows the projected image to be “focus-free,” it also means that the smallest object blocking the projected image casts a harsh shadow over part of the image.
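To put rough numbers on the ambient-light point (number 3) above, here is a back-of-the-envelope contrast sketch in Python. The projector output (1 lumen) and projected spot size (40 cm²) are my own illustrative guesses, not Humane specifications; only the indoor and daylight illuminance figures are typical published values.

```python
# Back-of-the-envelope contrast of a pico projector image on skin vs.
# ambient light. The projector lumens and spot size below are my own
# illustrative guesses, NOT Humane specifications.

def projected_contrast(projector_lumens: float,
                       spot_area_cm2: float,
                       ambient_lux: float) -> float:
    """Approximate full-on/full-off contrast of a projected image on a
    surface that is also lit by ambient light (lux = lumens per m^2)."""
    spot_area_m2 = spot_area_cm2 / 10_000.0
    projector_lux = projector_lumens / spot_area_m2  # illuminance the projector adds
    # Bright pixels reflect (ambient + projector); "black" pixels still reflect ambient.
    return (ambient_lux + projector_lux) / ambient_lux

INDOOR_LUX = 300        # typical indoor/office lighting
DAYLIGHT_LUX = 30_000   # daylight is roughly 100x brighter than indoors

# Guess: ~1 lumen spread over a ~40 cm^2 patch of palm (adds 250 lux).
print(round(projected_contrast(1.0, 40.0, INDOOR_LUX), 2))    # 1.83 -- already washed out
print(round(projected_contrast(1.0, 40.0, DAYLIGHT_LUX), 2))  # 1.01 -- effectively invisible
```

Even if my guesses are off by several times in the projector’s favor, the conclusion is the same: indoors the image is low-contrast, and in daylight it disappears, because the projected light adds to the ambient light rather than replacing it.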

Humane’s Poor Image Quality & 720p Resolution Lie

Humane claims a 720p (1280 x 720 pixel) monochrome resolution. All laser projector resolution claims I have seen through the years have overstated their resolution, typically by about 2x horizontally and vertically. But Humane seems to have topped them by claiming about 4x the horizontal and vertical resolution they can deliver. A picture of the 502 x 410-pixel Apple Watch Ultra 2 is on the right. Humane’s image quality is worse than a cheap 1960s B&W TV (see the series of pictures below).
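The pixel arithmetic behind that claim is easy to check. The 4x-per-axis overstatement is this article’s estimate, not a measured spec; the Apple Watch Ultra 2 resolution (410 x 502) is Apple’s published figure.

```python
# Pixel arithmetic behind the resolution claim. The ~4x-per-axis
# overstatement is this article's estimate, not a measured spec; the
# Apple Watch Ultra 2 resolution (410 x 502) is Apple's published figure.

claimed_w, claimed_h = 1280, 720     # Humane's stated "720p"
overstate = 4                        # estimated overstatement per axis

effective_w = claimed_w // overstate          # 320
effective_h = claimed_h // overstate          # 180
effective_pixels = effective_w * effective_h  # 57,600 deliverable pixels
claimed_pixels = claimed_w * claimed_h        # 921,600 claimed pixels
watch_pixels = 502 * 410                      # 205,820 on the Watch Ultra 2

print(claimed_pixels // effective_pixels)         # 16 -> claim inflates pixel count 16x
print(round(watch_pixels / effective_pixels, 1))  # 3.6 -> watch shows ~3.6x more pixels
```

A 4x overstatement per axis compounds to a 16x inflation in total pixel count, which is why a tiny watch display ends up resolving several times more detail than the “720p” projection.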

Humane’s website and other promotional material have a series of fake still images (“Photoshopped” images overlaid on hands). While even these pictures are fairly low-resolution (much less than a true 720p image), they turn out to be much better than what is seen in videos of the Humane projector. I fear that AI will soon make it easy to create fake images and videos that are less obvious, but these fakes from Humane did not use its “AI” technology.

The images below are still frames from the Humane introduction video, “This is the Humane Ai Pin.” In addition to having lower resolution and contrast than the fake stills above, notice how the crevices in the skin break up the image rather than just darkening it. In the third still from the left below, the hand has been turned upward, catching more light (but nothing compared to a bright room or sunlight); notice how the image looks even more faded due to the ambient light.

The second set of video stills is from a demonstration by Bethany Bongiorno, Humane Cofounder. These videos were taken in a much darker environment, so the contrast of the projection is better. I want to see a demo in daylight (not in the shade). This video also shows how awkward it is to use your hand as a display screen and a controller and how the hand blocks the image.


But, but, but, it is all about voice recognition and speech.

Voice recognition has been mainstream for about a decade with Siri (2011) and Alexa (2014). Everyone has experienced the good and bad of voice input and output. Certainly, voice can be a good adjunct input, but it can also be very frustrating and difficult to correct when it does not understand you. AI will likely improve recognition, but it will not be perfect. In many situations, voice input is insecure, disruptive, and/or impractical (due to, say, noise or wind). What is one supposed to do with the Humane device in such situations?

Humane’s introduction video was a fiasco that highlighted the combined voice- and image-recognition problems. The founders looked stilted, trying to time everything and speaking slowly with preplanned questions to the “AI engine,” and sometimes the device gave erroneous answers. When asked about the best place to see the next solar eclipse, it gave a location too far south, and when asked to estimate how much protein was in a small number of almonds, it said that “half a cup has 15 grams.” A spokesperson for Humane responded that it got the eclipse date correct but that the location was a “bug,” and that while the “AI” missed the intent of the almond question, the answer was technically correct on grams per cup.

The AI answered like Cliff Clavin (the bar know-it-all from the TV series Cheers who constantly gave erroneous information with complete confidence – example video clip). Apple uses “Siri” to trigger its audio response; maybe Humane could use “Cliff.”

Just for fun, I asked my iPhone’s Siri, “Where is the best place to watch the next solar eclipse?” and it came back with a series of links to articles (that included color pictures and maps) that I could scroll through on my phone but could never read on Humane’s terrible display, nor would I want them read back to me.

In the real world, speech recognition is much more complex than Humane’s simplistic examples. The environment is going to be more complex and often noisy. When speaking, people are not careful to say each word clearly, particularly when they have to whisper to avoid disturbing the people around them too much. As has been widely discussed, AI will often answer questions with wrong information.

Old Scam with Slightly New Twist

Back in the days before streaming, DVDs, and VHS tapes, Disney found it could rerelease its animated features every seven years, and a new generation of kids would think they were new. Disney later followed a similar pattern by putting its VHS and, later, DVD releases in the “Disney Vault” between rereleases.

This pattern of letting memories fade away before reintroducing them as “new” also holds in high-tech. In 2016, this blog wrote about eyeHand in Wrist Projector Scams – Ritot, Cicret, the new eyeHand. These companies showed some comically bad fake images that included “projecting black,” violating all laws of physics.

The small companies’ scams got big companies like Dell and Hilton to buy into the marketing hype, and they released several “concept videos” seemingly based on these scams (see below, from that same article).

In 2018, Haier and ASU revisited the concept (see the 2018 CES Haier Laser Projector Watch – (Wrist Projector Scams Revisited)). Haier showed a prototype at CES 2018, enabling me to photograph the difference between the less obviously fake images and their real appearance (see article for more details).

Conclusions

Once you get past the contrived demonstrations and crappy display, Humane’s problem is convincing real people that what they have is better than a smartphone. A modern smartphone has a better onboard processor, a display that is orders of magnitude better, support for audio and video input, and internet connectivity for AI or related speech recognition and answers. A phone does not force one to wear clothes that can support it or to hold a hand out to see a crappy image.

Few will believe a startup company has better speech recognition and information database technology than Google, Apple, Microsoft, and other giant companies. If Humane really had better technology, why didn’t they release it as a software application rather than developing this terrible device?

Humane has no advantages and massive disadvantages compared to a smartphone, and based on my reading of the responses, it seems most people in technology have come to the same conclusion. There are a few “everyone gets a trophy” and “anything is possible” people who have an open (or empty?) mind about this concept.

How so many businesspeople could get their companies to fork over a combined $230M confounds me. I’m waiting for the AI program that can explain it . . . Not really; it is probably a combination of ego, unwillingness to do good due diligence, and fear of missing out (FOMO), all very human problems. These same “businesspeople” will run most startups through the wringer for a few million in VC money and then hold a cash bonfire with Humane AI and Magic Leap (another company that originally told investors that a key technology was laser projection). Thankfully, at least this time, they are not stealing money from individuals on Kickstarter or Indiegogo like Ritot, Cicret, and eyeHand did, even if the amount is two orders of magnitude higher.

Karl Guttag

18 Comments

  1. Karl, for the voice recognition, citing Alexa and Siri is fairly outdated. Have you tried ChatGPT on iOS or Google Gemini? They do understand you, not just from a simple speech-to-text standpoint, but very much so contextually.

    This gizmo is nothing more than an LLM / AI front end – you can have the same on an iPhone, and pretty soon via watchOS too…

    • SaKiE, I was not clear, but I was using Siri and Alexa “figuratively.” They can roll in better capabilities as time goes on, including more/better AI processing of speech.

      The key point is that speech input, while very useful in some situations, has problems in others. There are many cases where you can’t talk. Also, many times the response is better delivered as visual or written information, particularly if there may be multiple options.

      But the main question is what can the Humane AI pin do that a cell phone combined with a smartwatch couldn’t do better?

      • Nothing, I am in agreement with that. The only point of the device is to create hype, get some investment, and then fall apart…

  2. you took it easy on them tbh. i still cant believe this product exists and that they got so much money for it. whoever wrote up their VC pitch is a genius, or VCs have more money than sense. even if we were to ignore all the obvious technical limitations, you’d have to consider the uphill climb that is getting users to stop using their phones, which for many is an addiction. and as you mentioned, Google and Apple both offer phones + watches + earbuds, which together offer a much stronger solution for what humane says is a “problem/need”. the easiest scam to sell in Silicon Valley must definitely be “we’re building the next big thing after the smartphone”.

  3. Yeeep. My reaction nearly a year ago, when the info I had was a single promo picture:

    01/16/2023 8:52 PM
    Oh, another spin through that grift. Admittedly it doesn’t have the challenges of a watch form factor but much of https://kguttag.com/2018/02/17/2018-ces-haier-laser-projector-watch-wrist-projector-scams-revisited/ still applies. Useless and unpleasant, actively dangerous to those around you, or both. Don’t worry, their bandwagon proprietary neural network sauce will always know where your hand is – AIs are really good at understanding hands bro – and definitely won’t zap ten people’s retinas in a millisecond like a Tesla on a toddler.

    $100 million JFC.

    • A few things:
      1. At least one person (https://xrgoespop.com/home/humane) a year ago caught that Humane AI was more or less an integrated version of Pranav Mistry and Pattie Maes’s MIT Sixth Sense project. Mrunal Gawade on LinkedIn pointed out Sixth Sense in a comment on my post (https://www.linkedin.com/in/karl-guttag-a3b890/recent-activity/all/)
      2. I included a pointer and link to the article on the ASU/Haier watch in the article
      3. I don’t think, at the power of their laser, which they say is Class 2, that there is any significant eye-damage risk.
      4. The $100M was just the latest funding round. In total, Humane AI has raised $230M

  4. About the only thing they have right is recognising that limited pixel count leads to limited options for icon/interface, and that carefully encoding ‘meaning’ becomes very important. And that a single colour has fewer problems with focus and contrast. The rest is entry-level stuff.

    I thought that the NUVIZ HUD (now a few years ago) did a great job of delivering a bit of a ‘Goldilocks’ solution (functionally not too much, not too little) for motorcyclists. The problem, of course, is that they went out of business, the firmware lapsed, and you were left with a $650 paperweight. So the question is, when will the Humane AI pin turn into a $699 paperweight…?

  5. I have no intention to defend this company or its product, but I do wish there was more conversation around the concept of what information we humans desire to be provided to us in different contexts, and what data from our surroundings we want our devices to use. Example, for planning a walk a (big screened) iPad is better than an iPhone, but once we are on the walk then a (hands-free) Watch is the better device to tell us to Take Next Left. For many purposes, a full colour high-resolution screen is overkill – we just want to know the time, time to the next train, which platform to head to, which direction to walk in. Even for such simple but invaluable information, I struggle to see why projecting into a user’s hand is preferable to a smart watch.
    Now, what an Apple Watch can’t do but Humane’s device can is use a camera to get visual information about a user’s environment (yep, just like any smartphone. Humane’s device is hands-free, but a smartphone could be perched in a jacket pocket with its camera facing outwards too). How useful an outwards-facing camera is is debatable, but it certainly isn’t essential, since every reader here has reached adulthood without such a gadget! That said, I recently found myself using my phone camera in Google Translate a lot whilst in France. I could imagine a body-mounted camera could be convenient when shopping (user picks up a book, camera reads barcode, the device tells them the Goodreads review score) or in warehouse stock control. But again, this could be done with any smartphone in a suitable case with the right software. (Or even a Google Glass head-mounted device, haha)
    Tl;dr there is still virtue in exploring human/internet interfaces that aren’t phone-based, even though this particular implementation is flawed.
    As ever, thank you and keep up the good work Mr Guttag!

    • Even for such simple but invaluable information, I struggle to see why projecting into a user’s hand is preferable to a smart watch.

      For every “feature” of the Humane AI, the task could be better done with existing devices (smartphone, earbuds, and smartwatch).

      Not only does a watch have better resolution and color, it will work in daylight (something I doubt the projector can do), and you don’t have to hold it right in front of you at the correct distance for the projector hit (as it flops about on your shirt or jacket).

      That said, I recently found myself using my phone camera in Google Translate a lot whilst in France. I could imagine a body-mounted camera could be convenient when shopping (user picks up a book, camera reads barcode, the device tells them the Goodreads review score) or in warehouse stock control.

      The other HUGE factor with the Humane AI camera is that you can’t really aim it except by pointing your whole body, and you still don’t know exactly what it is looking at. Like you, I have used my smartphone to translate signs while traveling; I can’t imagine how you would do that with signage. Would it be reading a lot of stuff you don’t want in hopes of catching the stuff you do want? Not to mention how inappropriate it would be to have it blathering away in some places.

      The amazing thing is that this concept that is so full of obvious major flaws raised $230M.

      • Absolutely, Mr Guttag, I agree on all points. And it may reassure you that when popular technology blogs have run articles about the Humane device, the hundreds of comments left by members of the public have also been near unanimous in their disparaging of the device.
        What I am interested in is alternative forms of user/internet/environment interaction in general, not Humane in particular.
        As such, I’m interested in the answers that a hundred people might give if asked to ponder the question “What tasks would be easier/possible if a smartphone was fixed to your jacket, camera pointed outwards?”. They might first think of barcodes and QR codes, then of text, then of the improvements of ML image recognition, they might think of users with only one hand, or situations where a user requires both hands.
        It might be that picking up a French book in one hand and running a finger along the line of the text you want translated into English through an earbud is a convenient user interaction. Or maybe it isn’t – we can’t know until we at least consider it.
        If consumers consider and discuss such questions and potential use-cases, they will be less likely to be taken in by products that don’t meet their expectations. Such discussions would also bring a greater breadth of life experiences to the attention of product designers.
        Of course, such discussions should be based on solid foundations of what various technologies are actually capable of, which is why your objective testing and analysis is so invaluable. Cheers!

  6. Thank you, Karl. You are correct about the mount; the one we use for our police body camera uses a steel plate and several rare-earth magnets to hold it in place. Just pinning it to your tee shirt would not work very well.
