So today we will do a quick conversion from mathematical notation into a real algorithm that can be executed. Note that we will not be covering gradient descent itself, but rather cost functions, errors, and how to execute them, which provides the framework for gradient descent. Gradient descent has so many flavors that it deserves its own article.
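As a taste of what that conversion looks like, here is a minimal sketch (in Python, just for illustration) of turning a common cost-function formula, mean squared error, J(θ) = (1/2m) Σ (h(xᵢ) − yᵢ)², into executable code. The article does not specify which cost function it uses, so the linear hypothesis and the sample data below are assumptions for demonstration only.

```python
def hypothesis(theta0, theta1, x):
    """Linear hypothesis h(x) = theta0 + theta1 * x (an illustrative choice)."""
    return theta0 + theta1 * x

def cost(theta0, theta1, xs, ys):
    """Mean squared error cost J(theta) = (1 / (2m)) * sum of squared errors."""
    m = len(xs)
    total = sum((hypothesis(theta0, theta1, x) - y) ** 2 for x, y in zip(xs, ys))
    return total / (2 * m)

# Toy training set where y = 2x, so theta0=0, theta1=2 fits perfectly.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]
print(cost(0.0, 2.0, xs, ys))  # perfect fit -> 0.0
```

The point is that each symbol in the formula (the sum, the 1/2m factor, the hypothesis h) maps to one concrete line of code, which is exactly the kind of translation gradient descent will later iterate over.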
So I’ve been working on building some interesting visualizations with open data, and today I get to show off a really interesting one. Not only will we discuss the visualization in depth, but we will also dive into how I built it. Here it is: the top 10 bookings in Miami, with the legend in descending order of the most common bookings overall.
Here is a recorded version of an in-person training I have been doing. Enjoy. I end up coming back to it myself for reference.
This episode is all about performing data manipulation to derive raw insights from your data using the R programming language. Data manipulation is at the core of everything you do in business intelligence and machine learning. This episode sets the base for all R-based intelligence sessions from here on out.
This article is a video tutorial introducing the very bare basics of R. It’s a bit dry, but it covers the components underlying everything in the more interesting material. You can’t do the cool stuff without understanding the basics first.
Ever wonder about the difference between R and Microsoft R? Considering learning R as a programming language? You should probably watch this video. It is the first in a four-part series to give you the jump start you need to become a professional data scientist with R.
These days I need to make videos instead of written articles, so I am going to post a few of those here.
In this video we will do an initial exploratory analysis on a water flow data set that came from a prototype I built. The prototype consists of a water pump, a valve, and a flow meter. The data set lives in SQL Azure. We will use R and RStudio to perform the analysis from an Azure virtual machine.
Many folks may know that the South Florida Evangelism team is undertaking a task that many think is impossible. Well, all I hear in that statement is “there is still a chance!” The end goal is to create a teddy bear that can have a conversation about anything. So step one is to collect as much dialogue as possible from as many sources as possible and annotate it. What better place to power an association engine for word and phrase relevance than something that forces you down to 140 characters to get your message across?
So, like any normal developer, I decided to start by looking for existing samples. MSDN has a great starter for writing tweets and doing sentiment analysis with HBase and C#. The only issue with the sample is that it is poorly written and difficult to understand, with no separation of concerns. So I want to go through simplifying the solution and separating out a few concerns.
If you have ever built mapping applications, you may have needed to do this. It takes a lot of searching around the internet to finally find the right equations. For our application, we need to do this for Google Maps, since it does not take a latitude/longitude pair the way Bing Maps does. If you choose to support ONLY Bing Maps, your job is easy; the format is: http://www.bing.com/maps/default.aspx?q=LATITUDE%2c+LONGITUDE (include negative signs if necessary). Google Maps, however, requires more work (UGH!): https://www.google.com/maps/place/LATITUDE <directional> LONGITUDE <directional>, ZOOM (with various encoded separators).
This article goes through the code that converts the latitude/longitude you would pull from a phone’s GPS into the DMS format Google Maps needs. Again, you don’t even need to bother with this conversion if you choose to use Bing Maps; its format is simply as stated above.
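To make the conversion concrete, here is a sketch in Python (the article itself may use a different language) of decimal degrees → degrees/minutes/seconds with a directional, plus builders for both URL formats mentioned above. The Bing format is taken verbatim from the text; the exact Google Maps separators and zoom suffix are my assumptions, so verify them against a URL your own browser produces for a known location.

```python
from urllib.parse import quote

def to_dms(decimal_degrees, is_latitude):
    """Convert a decimal coordinate to (degrees, minutes, seconds, directional)."""
    directional = ("N" if decimal_degrees >= 0 else "S") if is_latitude \
        else ("E" if decimal_degrees >= 0 else "W")
    magnitude = abs(decimal_degrees)
    degrees = int(magnitude)                      # whole degrees
    minutes_full = (magnitude - degrees) * 60     # fractional degrees -> minutes
    minutes = int(minutes_full)
    seconds = (minutes_full - minutes) * 60       # fractional minutes -> seconds
    return degrees, minutes, seconds, directional

def bing_maps_url(lat, lon):
    # Bing takes decimal degrees directly (negative signs included).
    return f"http://www.bing.com/maps/default.aspx?q={lat}%2c+{lon}"

def google_maps_url(lat, lon, zoom=17):
    # Google wants the DMS form; the separators/zoom here are assumptions.
    def fmt(d, m, s, c):
        return f"{d}°{m}'{s:.1f}\"{c}"
    place = f"{fmt(*to_dms(lat, True))} {fmt(*to_dms(lon, False))}"
    return f"https://www.google.com/maps/place/{quote(place)}/@{lat},{lon},{zoom}z"

# Example with coordinates in downtown Miami.
print(to_dms(25.7617, True))        # degrees, minutes, seconds, 'N'
print(bing_maps_url(25.7617, -80.1918))
print(google_maps_url(25.7617, -80.1918))
```

The core trick is simply peeling off the whole-number part twice: the remainder of the degrees becomes minutes, and the remainder of the minutes becomes seconds, with the sign carried separately as the directional letter.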
So I had a life-changing event this past Sunday, 5/24/2015, at 8:55am: my first child was born! Both child and wife are healthy and happy. Everything is good in life. Like many couples, though, my wife and I struggled to find the right name for our child. We didn’t want something too common, or an old-fashioned name, or something so rare and funky that nobody could spell it. We also realized we simply didn’t know what names were out there. So after much debate and discussion over what to name her, I started doing a bit of analysis using some census data. I want to thank Jamie Dixon for providing the data he found for his Dinner Nerds article. The data itself can be found here. This article will discuss the code used to go through all of the data and provide insights into child names.