
Countdown to I/O

This is going to be the second year I've watched I/O live(streamed), from school, no less. While I'm hoping Google announces some bombshell that hands them a massive monopoly, I, harboring bouts of cynicism, expect a bit less.

Firebase, Firebase

When Google announced Firebase last year, I shook and shivered with excitement. I thought, "An integrated mobile and web development backend I could use to make anything? Sign me up." Of course, Firebase got better with new features like Cloud Functions, but I don't think Google is done with it - they're not even close. While I know just as much as anyone not at Google about the announcements taking place in the next five or six hours, I'm sure Google is going to announce deeper integration with their Cloud Platform. Cloud Functions was the beginning of Firebase adding functionality to a "consumerized" cloud, if you will. The rest of Google Cloud Platform will be for anyone, though mainly massive enterprises, while Firebase will be to Google Cloud Platform as Allo sort of is to Hangouts Chat: they use the same backend, but they serve overlapping yet noticeably different target markets.

Android O

We know they're definitely not going to announce that Android O is Android Oreo or any other name at I/O, but we do know another developer preview is going to be released. I'm curious to see what features the Android team managed to stuff in this time, but O looks like it's going to be the M of Android updates unless something major is announced. Notification channels are a great feature, but they're nothing that will make people demand OEMs release updates quicker. Speaking of updates, Project Treble does look like it will speed up the Android update process.

Artificial Intelligence

Google loves its AI, and I expect it to be the obvious focus of I/O. TensorFlow is going to be showcased in some talks, but I think Google might have a major announcement that makes AI more accessible to developers. Sure, that's vague, but Google has been on a trend of sharing AI with more people in hopes of creating another breakthrough.

While I gave some pretty vague predictions, we all know the gist of what's happening this year. I'll happily watch the livestream at 12:00 PM Central Time for the unfortunate 20 minutes I can and check in every passing period to catch updates from the talks I care about. Afterwards, there's sure to be a stream of I/O talks on YouTube for me to cast to my TV and binge-watch. Good morning to all the media people attending I/O, and I hope you all enjoy your time there. (I prefer to get my news from the primary source, thank you.)
