This is a short post. I had a data set come in with some funky characters in it, and I kept getting “Can’t read this; doesn’t appear to be UTF-8”. I looked around on Stack Overflow for a while to little avail, and eventually came up with the approach below, which works.
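For reference, here is a minimal sketch of the shape of the fix (the file name is hypothetical, and this is the general idea rather than the exact snippet): try strict UTF-8 first, then fall back to a permissive encoding.

```python
def read_text(path):
    """Read a text file, falling back to a permissive encoding
    when strict UTF-8 decoding fails."""
    try:
        with open(path, encoding="utf-8") as f:
            return f.read()
    except UnicodeDecodeError:
        # latin-1 maps every byte to some character, so it never
        # raises -- at the cost of possibly misrendering the funky ones.
        with open(path, encoding="latin-1") as f:
            return f.read()

text = read_text("data.csv")  # hypothetical file name
```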
So this is an interesting problem. You are collecting data from somewhere and you want to feed it into a neural network for classification. There is one main problem with this: the shape of the data! Neural networks, like most models, require consistently shaped input; you can’t just hand them something of ambiguous size. There are tons of papers out there on dimensionality reduction, but very little on reducing data to a specific, predetermined size. This article explains my approach.
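To make the goal concrete before diving in, here is one simple way to coerce variable-length observations to a fixed size (a sketch for orientation, not necessarily the approach developed below), using linear interpolation with NumPy:

```python
import numpy as np

def to_fixed_length(series, target_len=128):
    """Resample a 1-D sequence of arbitrary length to exactly
    target_len points via linear interpolation."""
    series = np.asarray(series, dtype=float)
    old_x = np.linspace(0.0, 1.0, num=len(series))
    new_x = np.linspace(0.0, 1.0, num=target_len)
    return np.interp(new_x, old_x, series)

# Two observations of different lengths both come out as length 128,
# so they can be stacked into one fixed-shape network input.
a = to_fixed_length(np.random.rand(500))
b = to_fixed_length(np.random.rand(73))
assert a.shape == b.shape == (128,)
```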
This article is loosely based on a time series challenge from customer data. I have fabricated three data files that represent the same challenge, and we will go through the process of exploring that data. The primary difficulty is that the data comes from a sleep study and the researchers left the date portion off the time stamps, which means that at midnight the data wraps back to the beginning of the x-axis. The second difficulty is lining the series up to see whether anything interesting happens at particular times. Yes, you can simply plot against the index that Python generates, but since this is a study involving humans I’m also interested in the actual time of day.
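As a preview of the midnight-wrap fix, here is a minimal pandas sketch (file and column names are hypothetical, not the fabricated files we use below): parse the time-only stamps, then add a day each time the clock value decreases so the plotting axis stays monotone.

```python
import pandas as pd

# Hypothetical file and column names.
df = pd.read_csv("subject1.csv")

# Parse the time-only stamps; pandas supplies a dummy date.
t = pd.to_datetime(df["timestamp"], format="%H:%M:%S")

# Each time the clock value decreases, we crossed midnight, so add
# one day per wrap seen so far to keep the plotting axis monotone.
wraps = (t.diff() < pd.Timedelta(0)).cumsum()
df["plot_time"] = t + pd.to_timedelta(wraps, unit="D")
```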
So today, I was asked to put some thought into what we should focus our entry-level data scientists on in terms of tech skills. After putting a bunch of thought into it, I ended up coming up with this. I decided that the most important considerations boiled down to a few items:
Don’t overload them.
They can deliver to production, where the target can be anything, including IoT.
They will not be concerned with building front ends.
This article is a high-level discussion of where you might use various Microsoft technologies in the field of robotics. I will begin with a pet side project I’m kicking off to get more familiar with some cool tools and tech I’ve discovered lately, in the hope of getting assigned to some really cool projects at work, including drones.
I’m writing this article because, believe it or not, this process is a pain in the neck and not completely documented in any one place. Let’s start with why in the world you would want to do this. For me, I want to use TensorFlow and NVIDIA’s embedded robotics SDKs, and unfortunately the only supported dev environment for these is Ubuntu. I have nothing against Ubuntu; it just appears to be fairly unstable in comparison to Mac and Windows. But that is neither here nor there: if you want to build intelligent robots, you need these tools.
This article is meant to explain how the K-Means Clustering algorithm works while teaching a little Python along the way.
What is K-Means?
K-Means Clustering is an unsupervised learning algorithm that tells you how similar observations are by putting them into groups, or “clusters”. K-Means is often used as a discovery step on new data to find out what the natural categories might be; once you understand the centroid labels, you can then apply something such as k-nearest-neighbors as a classifier. A centroid is simply the center of a “cluster” or group.
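As a quick taste of what that looks like in practice, here is a minimal sketch using scikit-learn on toy data (the walkthrough below may build things up more from scratch):

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy 2-D data: two well-separated blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(50, 2)),
               rng.normal(5, 1, size=(50, 2))])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.cluster_centers_)  # the center (centroid) of each cluster
print(km.labels_[:10])      # the cluster each observation was put in
```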
So I’ve spent a while now looking at three competing languages, and I did my best to give each one a fair shake. Those three languages were F#, Python, and R. I have to say it was really close for a while, because each language has its strengths and weaknesses. That said, I am moving forward with two of them, each in a very specific role. I wanted to outline this because it took me a very long time, learning all three languages to the level I have, to reach this conclusion, and I would hate for others to have to go through the same exercise.
So here we go with another recap. This week we did a deep dive into binary classification using logistic regression. Logistic regression and binary classification are the underpinnings of modern neural networks, so a deep and complete understanding of them is necessary to be proficient in machine learning.
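For a concrete anchor, here is a minimal binary-classification sketch with scikit-learn on toy data (a library sketch for orientation; the names and data are illustrative only):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy binary problem: one feature, label is 1 when the feature
# (plus a little noise) is positive.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1))
y = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([[2.0]]))  # class probabilities at x = 2.0
```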
Sigmoid really isn’t that complicated (once you understand it, of course). Some background, in case you are coming at this totally fresh: the sigmoid function is used in machine learning primarily as a hypothesis function for classifiers. What is interesting is that this same function is used in binary classifiers and multi-class classifiers, and it is the backbone of modern neural networks.
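The function itself is just sigmoid(z) = 1 / (1 + e^(-z)), which squashes any real number into the open interval (0, 1). A quick sketch:

```python
import numpy as np

def sigmoid(z):
    """Squash any real-valued input into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Large negative inputs approach 0, large positive approach 1,
# and 0 maps to exactly 0.5 -- the natural decision boundary.
print(sigmoid(np.array([-5.0, 0.0, 5.0])))
# [0.00669285 0.5        0.99330715]
```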