There has been a lot of hype about foldable phones. While it is awesome to see screens that can actually fold, and there is already software supporting the feature, I do not see much of a future for foldable phones. The key problem is that, in current implementations, the phone has to be twice as thick. Also, there is only one possible change of display size: folded or unfolded. This fundamentally limits the concept of a "soft" display.
I believe more in scrollable screens. See this year's scrollable TV at CES as a reference. A screen would be far more useful if it could be dragged out to different lengths. Even if you still cap the scrollable length, and fix the scrolling distance for a given screen size, we would no longer need the extra thickness of a folded phone. Like the design of Amazon's Kindle Oasis, or of any scroll, it would save so much space, and the phone would be much smaller.
So far we have seen many machine learning (ML) papers in computer vision, from object detection to activity recognition. There are tons of papers doing good, and just as many doing the bad thing, namely adversarial ML. Many papers now follow the trend of attacking existing well-trained ML models. In one example, by wearing glasses with specially designed color patterns, a person can be recognized by machines as someone else (shown in the figure below). You can find many, many papers on this topic. Continue reading Adversarial ML in Wireless?
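To make the idea concrete, here is a minimal, hypothetical sketch of one classic attack of this kind, the fast gradient sign method (FGSM), applied to a toy logistic-regression "model". This is not the glasses attack above; the model, weights, and epsilon are all made up for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    # Probability that input x belongs to class 1.
    return sigmoid(w @ x + b)

def fgsm_perturb(w, b, x, y, eps):
    """One fast-gradient-sign step: nudge x in the direction that
    increases the cross-entropy loss for label y."""
    p = predict(w, b, x)
    grad_x = (p - y) * w          # d(cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=8)            # toy model weights (illustrative only)
b = 0.0
x = rng.normal(size=8)            # a "clean" input
y = 1.0 if predict(w, b, x) > 0.5 else 0.0   # the model's own label for x

x_adv = fgsm_perturb(w, b, x, y, eps=0.5)
print(predict(w, b, x), predict(w, b, x_adv))
```

Even this tiny example shows the core trick: a small, structured perturbation pushes the model's prediction away from the label it originally gave, which is exactly what the adversarial glasses do to a face recognizer.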
Just an idea. Today, machine learning and deep learning rely heavily on data, in particular on the volume of data. If we have millions of images of a single object, we can train a model that eventually approximates a function mapping an image to the object it contains. The resulting model can be complicated, requiring multiple layers of neurons and taking days to months to train.
What if the complexity of modeling and training is caused by incomplete data? I do not mean that millions of images of a single object are not enough. I mean: is an image of an object, by itself, incomplete? For example, when humans see a dog running on the ground, we may draw on additional information to recognize that it is a dog, such as sound, e.g., the dog barking.
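The intuition can be sketched with a toy simulation: if each modality (image, sound) alone gives only a noisy estimate of "is this a dog?", fusing the two needs less evidence from either one. The numbers below are entirely made up; no real model or dataset is involved.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
label = rng.integers(0, 2, size=n)            # 1 = dog, 0 = not a dog

# Each modality produces a noisy score centered on the true label.
image_score = label + rng.normal(0, 0.8, size=n)
sound_score = label + rng.normal(0, 0.8, size=n)

def accuracy(score):
    # Threshold the score halfway between the two class centers.
    return np.mean((score > 0.5) == label)

# Simple late fusion: average the two modality scores.
fused = (image_score + sound_score) / 2
print(accuracy(image_score), accuracy(sound_score), accuracy(fused))
```

Averaging independent noisy scores shrinks the noise, so the fused classifier beats either modality alone; that is the sense in which an image by itself may be "incomplete" evidence.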
Smartphones and wearables are now waterproof. This is great. It means we can shower without taking off the smartwatch; we can swim without missing messages and notifications; and we can surf and still pick up phone calls even if the device is dropped in water.
All of this sounds great, but touch screens are water-unfriendly. Whenever there is water on the screen, the touch screen goes nuts: either it becomes unresponsive, or it registers random touches as if a ghost were tapping it. Water interferes with the capacitance change that human skin normally causes on the screen, breaking the touch screen's functionality.
Google Home comes out today. It is super exciting to see Google bringing its cutting-edge technology to the smart home, joining Amazon Echo and Samsung ARTIK. What is interesting about Google Home is that we can see many opportunities beyond just voice commands.
While security is surely one concern (and a big one, of course) for these smart home devices (or, more precisely, for IoT development in general), I thought of one thing in particular: sensing. Although this is not limited to Google Home, I use it here as an example. Continue reading Google Home and Project Soli = ?
I have had this idea for a long, long time. Yesterday my friends and I were talking about it again. Well, apparently others have already done it, if you Google it. Below is a very short list of things I found online (there are tons of apps and companies doing this).
But it made me wonder why this approach isn't more popular. Many companies make expensive security cameras, and people still buy them. Maybe because we trust devices dedicated to a single purpose? Then I saw that Yi (backed by Xiaomi) released a low-cost security camera (around $30), and a lot of people like it, including me. I realized that it isn't that we distrust low-cost solutions that repurpose old smartphones for surveillance. We like the new cameras and buy them because we simply don't care, because most of us simply like new stuff.
No matter how easy the setup is for the whole "using an old smartphone as a monitoring system" idea, old stuff is old. We no longer want to touch the old phones. Maybe they are painfully slow. Maybe they have been sitting in the dust for too long. Maybe some functionality no longer works. Maybe they carry so many memories that, emotionally, we do not want to revisit them. We move on, we get new phones; why would we turn back?
Just finished my cruise trip to Mexico with my gf, and it was a fantastic experience, though I had to read around 100 papers and organize them to find my potential research directions, and write up the journal.. I managed to do it all before the start of the new year and enjoyed the trip at the same time.. phew..
Anyway. After reading the sensing and wireless papers from MobiCom, MobiSys, SenSys, NSDI, and HotMobile from 2012 to 2015, I found an interesting phenomenon. We always claim that we can do this by using that. Take "localization" as an example: it has been studied for years, and people use all kinds of technologies (e.g., FM, Wi-Fi, RFID, sound, geomagnetism, visible light, 60 GHz, etc.) to achieve meter-level, cm-level, or even mm-level accuracy. Of course, this is done under various assumed scenarios. And we show that we can do it. Most introductions look like: Continue reading From "We Can" To "We Should"