Summer Break 2017, Day 4 of 83: Short notes

What did I accomplish today? I finished my lesson on naive Bayes, learned about linear dependence, and began learning about graph-based algorithms.

Because of my unfortunate procrastination and a family member's request to handle some unexpected work, I don't have much time today to document my experiences. I'll likely produce a follow-up or two on Saturday to compensate.

Other news

Today, the third developer preview for Android O came out. I've been reluctant to join the Android Beta Program, though. My Nexus 6P already dies at around 40% battery, and I don't want to tempt the gods of battery life to push that number any higher. Sure, Android Nougat already introduced improvements to Doze, and Android O will impose background execution limits on services, but I know the real problem is likely the hardware. Because my phone is from Best Buy, and my benefactor didn't get an extended warranty, I'm essentially screwed until I get a new phone. (Hopefully, that new phone is a Pixel 2 that I receive for Christmas, but I'm not too optimistic about that.)

On a side note: I really despise the new emoji. I've confirmed it's not just because they're new; the icons go against everything Android stands for, and, based on a quick poll of my family, the old icons actually look better. I'm disappointed the Android emoji designers aren't taking the philosophy of "Be together, not the same" to heart. I don't want an Android phone to use iOS-style emoji. I want Android's unique brand identity to show through.

My decision: I'll wait until next week for any glaring issues to surface, and then I'll give in to the terrible cat emoji.

I have some data to sift through and manually input onto paper. Have a good night.
