Cross-Platform Note-Taking App

I am not an Apple fan, but I thought I'd give it a try. So, well, I bought an iPad mini 5.

Then I struggled to decide which note-taking app to use.

Back on Android, when I was using a Galaxy Note 10.1 (2013 version), I used Papyrus, now rebranded as Squid Notes. It was a great note-taking app back then, though it didn't support PDF annotation (the new version, as of 2019, does). It also had easy-to-use, worry-free syncing to online storage and cloud drives: I could sync with both Google Drive and Dropbox simultaneously, which was a fantastic feature.

Now I find there are many apps to choose from on iPad: Notability, PDF Expert, GoodNotes 5, etc. So I tried them all.

Long story short: they are not yet cross-platform.

You may now think: why don't you just use Evernote? Sure thing, I have been an Evernote user since day one. It has improved a lot, and its tagging feature allows better organization. However, it still lacks certain functions I need.

To me, a good note-taking app needs the following features:

  • Cross-platform
  • PDF annotation support
  • Intuitive design (easy switching between typing, handwriting, and drawing)
  • Powerful organization (multi-tagging)

I am a little disappointed that none of the existing apps I have tried fits my needs.

Notability is great in that it syncs audio recordings with handwriting. But its PDF annotation is hard to use, and you cannot give your notes multiple tags for quick searching. It also does not support two-way syncing with cloud drives.

PDF Expert is great at PDF annotation, and it has great two-way syncing (making it partially cross-platform). But yea, note taking in it is painful.

Squid Notes (on Android) is great at note taking and syncing, but (back then) it didn't support PDF annotation. Its syncing is one-way only, and it isn't cross-platform.

Evernote is great at cross-platform support and easy-to-manage tagging, but its PDF annotation is painful on both iPad and Android tablets.

Microsoft OneNote is a great cross-platform product, but its note-taking design is very messy.

And finally, none of the above apps has an intuitive design for mode switching. Why do you have to tap certain buttons to switch between modes? Why can't an app treat each page (or the whole canvas, in infinite-scrolling mode) as a free-to-edit PDF page and smartly detect whether the user wants to type with the keyboard or write with the pen?
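The automatic mode switching I have in mind could start as something as simple as routing each input event by its source, instead of asking the user to tap a mode button. Here is a minimal, hypothetical sketch in Python (event shapes and names are made up for illustration; a real app would use the platform's touch/pen APIs, such as checking for a pencil-type touch on iPad):

```python
# Hypothetical sketch: pick an editing mode from the input source,
# so the user never has to tap a mode-switch button.

def detect_mode(event):
    """Return an editing mode based on the event's (made-up) 'source' field."""
    source = event.get("source")
    if source == "keyboard":
        return "type"        # physical/on-screen keyboard -> insert text
    if source == "stylus":
        # A pen could mean handwriting or drawing; a pressure/tilt heuristic
        # (or a tiny classifier) could split the two. Default to handwriting.
        return "handwrite"
    if source == "finger":
        return "scroll"      # fingers pan/zoom, so you never draw by accident
    return "idle"            # unknown source: do nothing

events = [
    {"source": "stylus", "x": 10, "y": 20},
    {"source": "keyboard", "key": "a"},
    {"source": "finger", "x": 5, "y": 5},
]
print([detect_mode(e) for e in events])  # ['handwrite', 'type', 'scroll']
```

This is obviously a toy, but the point stands: the device already knows whether a pen, a finger, or a keyboard produced the event, so the app could switch modes for you.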

Well, if you know anything better, please let me know.

My Current Temp Solution

I am still using Evernote, as I have over 1,000 papers managed in it.

But I'm now trying to get away from it, as it keeps increasing its subscription fee yet still hasn't fixed many things I need (frustrating annotation, bad formatting).

So my current solution is:

  • iPad: Use Notability only when I need audio recordings synced with my notes.
  • iPad: Use PDF Expert to manage all notes (in PDF format) and other papers and documents, synced with my Google Drive (yea, it does not support syncing to two or more drives at once).
  • All: Use TagSpaces (cross-platform) to organize all PDFs in my Google Drive, which is synced to every computer I have.
  • Android: Since the DPT-RP1 is now my only Android tablet, and I only use it for reading, I just installed Google Drive and TagSpaces on it.

Yep.

Wearing VR/AR/MR Headsets Can Be Not So Weird

You look like an idiot (in a funny way)

VR/AR/MR has been developing actively in recent years. We have seen Facebook's Oculus leading VR, Google's ARCore and Apple's ARKit leading AR (not on a head-mounted display yet, but I'm sure there will be one), and Microsoft's HoloLens leading MR. In the end, they could all become a similar thing: the user wears some sort of headset (or glasses, if powerful enough) and sees a screen with fancy stuff.

If you are the person wearing the device, it is fine. You feel awesome. You live in a virtual or semi-virtual world and see things no one else can see.

But others see you as an idiot, as someone weird (in a funny way). Refer to the following video:

How can you look less (funny) weird?

The question is: how can you look not so (funny) weird?

While you can share your screen with friends so they know what you are doing, you still look funny, like this (I didn't mean to pick on Lenovo only):

Lenovo Windows HMD
The Lenovo Mirage Solo has two cameras that are supposed to look like your eyes.

Hmm... see where the problem is? The headset hardware itself!

It is bulky. It has to wrap around your face. And you look very dull wearing it (and thus somehow funny).


Why don't Google and Samsung team up and make a better smartphone and smartwatch???

Giving Machines the Ability to Think on Their Own

Just a random idea from my shower early this morning. Maybe someone has already done this. Maybe not.

Many of us want machines to learn by themselves. This has been studied for years, but there's no breakthrough yet. Could there be a fundamental problem that prevents us from getting there?

When deep learning came out, people went crazy about it. This is possibly the future of AI, people think. However, if you look closely, the underlying structure (the number of layers, their types, etc.) all relies on our decisions, the human's decisions. This is not AI.

What if we gave the machine the flexibility to also change those structures? We provide the building blocks, and it learns on its own where to use what.

I thought of Google's AutoML. What it basically does is automatically try many combinations of models using Google's powerful backend.

This is dumb but cool: training neural nets to train neural nets. However, the resulting neural network is still a fixed one, meaning it does not evolve.

The simplest solution would be to do something similar to AutoML, but with a reinforcement-learning-like closed-loop structure, so that the neural network (the one trained to design other neural networks) can refresh its own memory.

This trained network would be a building block for one particular task: designing another neural network for a particular action, like object detection or language translation.
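The closed loop I'm imagining is roughly what architecture search already does: propose, evaluate, update, repeat. Here is a toy sketch in pure Python (the scoring function is entirely made up and stands in for actually training and evaluating a candidate network; the "controller" is just a table of sampling preferences, not a neural net):

```python
import random

# Toy closed-loop architecture search: a "controller" proposes architectures,
# receives a reward for each, and shifts its sampling preferences accordingly.
# score() is a stand-in for real training/evaluation of a candidate network.

LAYER_CHOICES = ["conv3x3", "conv5x5", "maxpool", "dense"]

def score(arch):
    # Pretend conv3x3-heavy networks do best on our imaginary task.
    return sum(1.0 if layer == "conv3x3" else 0.2 for layer in arch)

def search(num_layers=4, rounds=200, lr=0.1, seed=0):
    rng = random.Random(seed)
    # Controller "memory": one preference weight per choice per layer slot.
    prefs = [{c: 1.0 for c in LAYER_CHOICES} for _ in range(num_layers)]
    best_arch, best_score = None, float("-inf")
    for _ in range(rounds):
        # Sample an architecture from the current preferences.
        arch = [rng.choices(LAYER_CHOICES,
                            weights=[p[c] for c in LAYER_CHOICES])[0]
                for p in prefs]
        reward = score(arch)
        if reward > best_score:
            best_arch, best_score = arch, reward
        # Closed loop: reinforce the choices that led to a high reward.
        for slot, layer in zip(prefs, arch):
            slot[layer] += lr * reward
    return best_arch

print(search())  # tends toward a conv3x3-heavy architecture
```

Real systems (like reinforcement-learning-based neural architecture search) replace the preference table with a trained controller network, but the feedback loop is the same shape: the designer network's "memory" is refreshed by the rewards its designs earn.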

This is clearly a layered structure. While it makes sense, most of our thinking is not really layered; it involves cooperation among many different areas of the brain. So if we somehow connect these trained neural networks together into a larger mixture, rather than stacking them in layers, maybe the machine could have much more flexibility, enough to "evolve" - to think on its own.

Hmm..

Maybe.

Regarding foldable phones

There's been hype about foldable phones. While it's awesome to see screens that can actually fold, and there is software supporting the feature, I do not see much of a future for foldable phones. The key problem is that the phone now has to be twice as thick, based on current implementations. Also, there is only one possible change of display size. This fundamentally limits the concept of a "soft" display.

I believe more in scrollable screens. See this year's scrollable TV at CES as a reference. It would be far more beneficial if the screen could be dragged out further and further. Even if you limit the scrollable length, or fix the scrolling distance to a few set screen sizes, we would no longer need the additional thickness of the phone. Like the design of Amazon's Kindle Oasis, or any paper scroll, it would save so much space, and the phone would be much smaller.