VR/AR/MR has been developing rapidly in recent years. We have seen Facebook's Oculus leading VR, Google's ARCore and Apple's ARKit leading AR (neither currently uses a head-mounted display, but I'm sure one is coming), and Microsoft's HoloLens leading MR. In the end they could all converge into a similar thing: the user wears some sort of headset (or glasses, if the hardware gets powerful enough) and sees a screen with fancy stuff.
If you are the person wearing the device, it is fine. You feel awesome. You live in a virtual or semi-virtual world and see things no one else can see.
But others see you as an idiot, or at least as weird (in a funny way). Refer to the following video:
How to look less (funny) weird?
The question is: how can you avoid looking so (funny) weird?
While you can share your screen with friends so they know what you are doing, you still look funny, like this (not meaning to single out Lenovo):
Hmm... see where the problem is? It's the headset hardware itself!
It is bulky. It has to wrap around your face. And it makes you look dull (and thus somehow funny...).
There's been hype around foldable phones. While it's awesome to see screens that can actually fold, and there is software supporting the feature, I don't see much of a future for foldable phones. The key problem is that, with current implementations, the phone has to be twice as thick when folded. Also, there is only one possible change of display size: folded or unfolded. This fundamentally limits the concept of a "soft" display.
I believe more in scrollable screens. See this year's rollable TV at CES for reference. A phone would benefit even more if the screen could be dragged out further and further. Even if the scrollable length is limited, and the scrolling distance is fixed for a given screen size, we would no longer need the extra thickness of the phone. Like the design of Amazon's Oasis, or any scroll, it would save so much space, and the phone would be much smaller.
So far we have seen many machine learning (ML) papers in computer vision, from object detection to activity recognition. There have been tons of papers doing good, and just as many doing the bad thing, namely adversarial ML. Many papers now follow the trend of attacking existing well-trained ML models. One example: by wearing glasses with specially designed colors, a person can be recognized as someone else by machines (shown in the figure below). You can find many, many papers on this topic.
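The core trick behind many of these attacks is to perturb the input in the direction that increases the model's loss. Below is a minimal sketch of that gradient-sign idea (often called FGSM) against a toy numpy logistic-regression "model" rather than a real face recognizer; all weights and numbers here are illustrative assumptions, not from any actual system.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method against a logistic-regression toy model.
    y is the true label (+1 or -1); eps controls the perturbation size."""
    z = w @ x + b
    # gradient of the logistic loss with respect to the input x
    grad = -y * sigmoid(-y * z) * w
    # step in the sign direction of the gradient to raise the loss
    return x + eps * np.sign(grad)

# toy "model": classify by the sign of w @ x + b (illustrative values)
w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([0.6, -0.4, 0.2])   # correctly classified as +1
y = 1

x_adv = fgsm(x, y, w, b, eps=0.8)
print(np.sign(w @ x + b))        # original prediction: +1
print(np.sign(w @ x_adv + b))    # adversarial prediction flips to -1
```

Real attacks like the adversarial glasses use the same principle, but compute the gradient through a deep face-recognition network and constrain the perturbation to the glasses region.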
Just an idea. Today, machine learning and deep learning rely largely on data, on the sheer volume of data. If we have millions of images of a single object, we can train a model that eventually approximates a function mapping an image to the object. The resulting model can be complicated, requiring many layers of neurons and taking days to months to train.
What if the complexity of modeling and training is caused by incomplete data? Here I do not mean that millions of images of a single object are not enough. I mean: what if a single image of an object is inherently incomplete? For example, when a human sees a dog running on the ground, we may use additional information beyond the image to recognize that it is a dog, such as sound, e.g., the dog barking.
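One simple way to act on this idea is late fusion: run a recognizer per modality and combine their per-class probabilities. The sketch below is illustrative only; the class order, probabilities, and fusion weights are made-up assumptions, not results from any trained model.

```python
import numpy as np

def late_fusion(p_image, p_audio, w_image=0.7, w_audio=0.3):
    """Combine per-class probabilities from two modalities.
    The weights are illustrative assumptions, not tuned values."""
    p = w_image * np.asarray(p_image) + w_audio * np.asarray(p_audio)
    return p / p.sum()  # renormalize to a probability distribution

# hypothetical class order: [dog, cat, bird]
p_image = [0.45, 0.40, 0.15]   # vision alone is unsure: dog vs. cat
p_audio = [0.80, 0.10, 0.10]   # barking strongly suggests dog

p = late_fusion(p_image, p_audio)
print(p.argmax())  # 0 -> "dog": the audio cue resolves the ambiguity
```

The point is that a second modality can disambiguate cases where images alone would need far more training data to separate.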
Smartphones and wearables are now waterproof. This is great. It means we can shower without taking off the smartwatch; we can swim without missing messages and notifications; and we can surf and still pick up phone calls even if the device drops in the water.
All this sounds great, but touch screens are water-unfriendly. Whenever there is water on the screen, the touch screen goes nuts: either it is unresponsive, or it clicks randomly as if there were a ghost. Water interferes with the capacitance change normally caused by human skin and breaks the touch screen's functionality.
Google Home is coming out today. It is super exciting to see Google apply its cutting-edge technology to the smart home, just like Amazon Echo and Samsung ARTIK. What is interesting about Google Home is that we see many opportunities beyond just voice commands.
While security is surely one thing (and a big thing, of course) for these smart home devices (or, more precisely, for IoT development in general), I thought of one thing in particular: sensing. Although this is not limited to Google Home, I use it here as an example.
I have had this idea for a long, long time. Yesterday my friends and I were talking about it again. Well, apparently others have already done it, if you Google it. Below is a very short list of things I found online (there are tons of apps and companies doing this).
But it made me wonder why this approach isn't more popular. Many companies make expensive security cameras, and people still buy them. Maybe because we trust devices dedicated to a single purpose? Then I saw that Yi (owned by Xiaomi) released a low-cost security camera (around $30), and a lot of people like it, including me. I realized it isn't that we distrust low-cost solutions that repurpose old smartphones for surveillance. We like new cameras and buy them because we simply don't care, because most of us simply like new stuff.
No matter how easy the setup is for the whole "use an old smartphone as a monitoring system" idea, old stuff is old. We no longer want to touch the old phones. Maybe they are painfully slow. Maybe they have been sitting in the dust for too long. Maybe some functionality no longer works. Maybe they carry so many memories that, emotionally, we do not want to revisit them. We move on, we get new phones; why would we turn back?