In this article we will take a light pass over Cosmos DB; specifically the Mongo API for Cosmos DB and using that API from MongoEngine. I think one of the great things about Cosmos DB’s Mongo API is that I can simply swap out my connection string and, guess what, it works! This means not only can I use MongoEngine, but I can use PyMongo or any other framework in any language that connects to Mongo.
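To give a flavor of just how little changes, here is a minimal sketch of pointing MongoEngine at a Cosmos DB Mongo API endpoint. The account name, key, database name, and document model are all placeholders, not values from a real deployment:

```python
from mongoengine import Document, StringField, connect

# Cosmos DB's Mongo API speaks the standard Mongo wire protocol, so the
# only Cosmos-specific part is the connection string. The account name
# and key below are placeholders; copy yours from the Azure portal.
connect(
    db="demo",
    host="mongodb://myaccount:<primary-key>@myaccount.documents.azure.com:10255/"
         "?ssl=true&replicaSet=globaldb",
)

class Note(Document):
    text = StringField(required=True)

# Ordinary MongoEngine usage from here on.
Note(text="Hello from Cosmos DB").save()
print(Note.objects.count())
```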
So Jupyter is a great tool for experimental science. Running a Jupyter notebook can be tricky, though; especially if you want to preserve all of the data that is stored in it. I have seen many strategies, but I have come up with one that I like best of all. It is based on my “Micro Services for Data Science” strategy. By decoupling data and compute we can literally trash our Jupyter notebook and all of our data and notebooks still live, as the sketch below illustrates. So why not put it in a self-healing orchestrator and deploy via Kubernetes :D.
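To make the decoupling concrete, here is a rough sketch using the kubernetes Python client of a Jupyter pod whose notebook directory is backed by an Azure File share; every name here (image, secret, share, mount path) is a placeholder, not the article's actual configuration:

```python
from kubernetes import client, config

config.load_kube_config()

# A Jupyter pod whose working directory lives on an Azure File share,
# so deleting the pod never touches the notebooks or data.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="jupyter"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="jupyter",
                image="jupyter/tensorflow-notebook",  # placeholder image
                volume_mounts=[
                    client.V1VolumeMount(
                        name="notebooks", mount_path="/home/jovyan/work"
                    )
                ],
            )
        ],
        volumes=[
            client.V1Volume(
                name="notebooks",
                azure_file=client.V1AzureFileVolumeSource(
                    secret_name="azure-storage-secret",  # placeholder secret
                    share_name="notebooks",              # placeholder share
                    read_only=False,
                ),
            )
        ],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

Kill the pod, let Kubernetes reschedule it, and the mounted share comes back with everything intact.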
In this article we will review the technology stack required to enable this, as well as the logistics of setting it all up and operating against it. The solution uses Azure Container Services, Docker, Kubernetes, Azure Storage, Jupyter, TensorFlow and TensorBoard. WOW! That is a lot of technology, so rather than a deep-dive how-to we will cover some pointers on getting provisioned and then the high-level process for using it.
I’m not sure the title really nails it, but we are going to talk about solving VERY big problems as fast as we possibly can using highly sophisticated techniques. The actual how-to involves a ton of steps, so this blog article is a high-level overview of what you want to set up rather than the usual walkthrough of how to set it up.
So today we are going to do something really awesome: operationalize Keras with Azure Machine Learning. Why in the world would we want to do this? Well, we can configure deep neural nets and train them on GPUs. In fact, in this article, we will train a two-layer neural network that outputs a linear prediction of the energy efficiency of buildings, and then operationalize that GPU-trained network on Azure Machine Learning for production API usage.
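To give a flavor of the network we will build, here is a minimal Keras sketch of a two-layer regressor; the feature count, layer sizes, and training data are illustrative stand-ins, not the exact values from the article:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Two-layer regression network: one hidden layer plus a linear output,
# matching the shape of the energy-efficiency prediction task.
# The 8 input features and all hyperparameters are illustrative.
model = Sequential([
    Dense(16, activation="relu", input_dim=8),
    Dense(1, activation="linear"),  # linear output for regression
])
model.compile(optimizer="adam", loss="mse")

# Stand-in training data; in the article this comes from the
# energy-efficiency dataset.
X = np.random.rand(100, 8)
y = np.random.rand(100, 1)
model.fit(X, y, epochs=10, batch_size=16, verbose=0)
print(model.predict(X[:1]))
```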
So I just completed an incredible project with Brain Thermal Tunnel Genix, where I learned so much about pattern recognition, machine learning, and taking research and algorithms and pushing them into a production environment where they can be integrated into a real product. Today’s article takes those lessons and provides a sample of how to perform complex modelling and operationalize it in the cloud. The accompanying Gallery Example can be found here.
So this blog post is to get you operational with Docker, covering image and volume management with a pivot towards scientific computing and TensorFlow. I am working on building a Jupyter Notebook for the local machine learning meetup to learn the ins and outs of TensorFlow, and on deploying the whole thing to Azure. Part of getting this to work is not only managing the Docker containers, but also the data on the volumes, so that when we deploy to Azure and somebody opens the notebook it comes pre-loaded with all the necessary tutorial data.
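As one way to script this kind of volume management, here is a sketch using the docker Python SDK; the image name, volume name, and mount path are placeholders rather than the exact setup from the meetup notebook:

```python
import docker

client = docker.from_env()

# Create a named volume to hold the tutorial data, then run the
# notebook container with that volume mounted so the data survives
# container rebuilds. All names below are placeholders.
client.volumes.create(name="tutorial-data")
container = client.containers.run(
    "jupyter/tensorflow-notebook",
    detach=True,
    ports={"8888/tcp": 8888},
    volumes={"tutorial-data": {"bind": "/home/jovyan/work", "mode": "rw"}},
)
print(container.short_id)
```

Because the data lives on the named volume rather than in the container's writable layer, you can rebuild or replace the image and the tutorial data stays put.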
This article is a high-level discussion of where you might use various Microsoft technologies in the field of robotics. I will begin with a pet side project I’m kicking off to get more familiar with some cool tools and tech I’ve recently discovered, in the hope of getting assigned to some really cool projects at work, including drones.
For this portion of the HoL, we will be bypassing the Raspberry Pi portion altogether and going straight to provisioning resources and using a local app to simulate the telemetry data. You can still use a Raspberry Pi as this session was originally intended, but in the interest of time that can be skipped and the app below can be used to simulate data for a live dashboard.
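If you want a feel for what such a simulator does, here is a minimal sketch that sends fake telemetry to an IoT Hub using the current azure-iot-device package (which may differ from the SDK the original HoL used); the connection string and payload fields are placeholders:

```python
import json
import random
import time

from azure.iot.device import IoTHubDeviceClient, Message

# Placeholder: copy the connection string from the device you
# provisioned in your IoT Hub.
CONNECTION_STRING = "HostName=<hub>.azure-devices.net;DeviceId=<id>;SharedAccessKey=<key>"

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)

# Send a fake temperature/humidity reading every two seconds,
# mimicking what the Raspberry Pi would have reported.
for _ in range(10):
    payload = {
        "temperature": 20 + random.random() * 15,
        "humidity": 60 + random.random() * 20,
    }
    client.send_message(Message(json.dumps(payload)))
    time.sleep(2)

client.shutdown()
```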