Saturday 27 December 2014

Leadership

You have awesome engineers, and they want to advance in their careers. Their team is growing because of advances they've made, and you want to recognize the work they've done. The obvious answer is to put them in charge of the team they've built, especially as they're already the team's de facto leader. But is this what they want? Or just what they believe they're supposed to want? 

People Management Is A Different Skill

It's well known in the engineering world that engineers often reach a technical peak, only to be asked to learn an entirely new set of social and soft skills that they've probably spent the bulk of their career ignoring. Building these skills involves a lot of trial and error and, above all, most of their time. That time is now spent not coding, the very thing they were originally rewarded for. Suddenly, they go from being good at their job to bad at it, which destroys confidence and job satisfaction. 
The problem here is that people management is a different skill from technical leadership. What you want is to recognize someone as a thought leader and an exceptional performer, and to make them an example of what others should strive to be. Not everyone can strive to be a manager; who would do the work? Additionally, you don't want to send the message that management, as a career, is somehow "better" than coding, or anything else in the company for that matter. 

What Is Technical Leadership?

Technical leadership is authority over the most interesting parts of the job. Think about a typical engineer's day: decisions have to be made, problems have to be prioritized, solutions need to be found. Those are the fun parts of any engineer's job, and indeed all engineers must exercise some degree of technical leadership.
The rest of the job involves answering questions about how a thing you made works or doesn't work. Bugs need to be found and fixed. Documentation has to be written. Code has to be reviewed. Estimates must be made. Most of all, the longer an engineer remains at a company, the harder it is to find a few hours of isolation, free from interruption, to get work done. These are the worst parts of the job.
All of the fun work happens in these uninterrupted times, and these uninterrupted times happen only when your disappearance doesn't materially slow down someone else's work. Innovation happens when you have the bandwidth to hold the entire problem in your head, and most of the time that takes a great deal of "load time": research and contemplation about the problem. For introverts this means quiet; for extroverts it means dedicated time in a room with other people you work well with, thinking about the problem. 
So what gets in the way? Why can't exceptional engineers have more fun work, and have someone who is yet to be recognized do more of the less fun things? The key is empowerment, and an engineer's ability to say "no". This isn't a function of courage; it's a function of empowerment: most exceptional engineers can't just let problems go unsolved, and if they feel responsible for a product or service, they end up chained to it forever. Engineers often feel responsible for a project for the rest of its life, and this is exacerbated by the tendency for information to "silo" in one person rather than spreading across a team.

How To Empower Your Technical Leaders

As a project grows, a technical leader might naturally accrue team members, and accidentally transform into a people manager. This is the biggest risk. The way to empower your technical leaders is to set expectations early and often, encouraging them to be clear about their ultimate goals, without sending the message that they will be less respected if they don't transition into people management. If their goal is to be the best programmer imaginable, or to create a system that scales to 10 million users, or to understand deep processes in the operating system, then finding ways to help them reach their goals is rewarding for everyone. 
Technical goals are easy to meet: as a company grows there will be plenty of opportunities to reward engineers with "fun" projects and the ability to learn and grow as an engineer. Identify roles where someone will have to learn a lot; these roles are rewarding and coveted. Also find opportunities for professional development outside of your company, such as encouraging engineers to go to conferences, give talks, and become thought leaders in their field. Most engineers don't have an actionable plan for their own professional development, and so can benefit from advice on how to celebrate their accomplishments in a way that puts them in a favorable position. This also helps you make the case for raises and bonuses, and it increases the esteem of your engineering department in general, which in turn attracts more talent. Seeing someone speak on something that interests them is the number one reason why mid-to-senior level engineers seek employment at an organization rather than being recruited. Developing a strategy for positioning your technical thought leaders as the technical thought leaders is empowering, and it helps them empower themselves. 
The ability to say "no" is the second component of a technical leader, so develop a strategy around handoffs. The objective here is to reward innovators by not chaining them to their innovations forever; it encourages those who create to continue creating. Junior developers are perfect candidates to hand smaller creations off to: it helps them develop a sense of ownership without having to deeply understand the practices that produced the idea. Handoffs create opportunities in both directions, for the person freeing up time as well as the person accepting the responsibility. The incentives align well, but make sure there are clear expectations for the person receiving the responsibility: be honest about what it means for their career, and set up a strategy for them to get help with the project.

Wednesday 3 December 2014

Introducing Data Center OS

Developers today are building a new class of applications. These applications no longer fit on a single server, but instead run across a fleet of servers in a data center. Examples include analytics frameworks like Apache Hadoop and Apache Spark, message brokers like Apache Kafka, key-value stores like Apache Cassandra, as well as customer-facing applications such as those run by Twitter and Netflix.
These new applications are more than applications: they are distributed systems. Just as it became commonplace for developers to build multithreaded applications for single machines, it’s now becoming commonplace for developers to build distributed systems for data centers.
But it’s difficult for developers to build distributed systems, and it’s difficult for operators to run distributed systems. Why? Because we expose the wrong level of abstraction to both developers and operators: machines.

Machines are the wrong abstraction

Machines are the wrong level of abstraction for building and running distributed applications. Exposing machines as the abstraction to developers unnecessarily complicates the engineering, causing developers to build software constrained by machine-specific characteristics, like IP addresses and local storage. This makes moving and resizing applications difficult if not impossible, forcing maintenance in data centers to be a highly involved and painful procedure.
With machines as the abstraction, operators deploy applications in anticipation of machine loss, usually by taking the easiest and most conservative approach of deploying one application per machine. This almost always means machines go underutilized since we rarely buy our machines (virtual or physical) to exactly fit our applications, or size our applications to exactly fit our machines.
By running only one application per machine, we end up dividing our data center into highly static, highly inflexible partitions of machines, one for each distributed application. We end up with a partition that runs analytics, another that runs the databases, another that runs the web servers, another that runs the message queues, and so on. And the number of partitions is only bound to increase as companies replace monolithic architectures with service-oriented architectures and build more software based on microservices.
What happens when a machine dies in one of these static partitions? Let’s hope we over-provisioned sufficiently (wasting money), or can re-provision another machine quickly (wasting effort). What about when the web traffic dips to its daily low? With static partitions we allocate for peak capacity, which means when traffic is at its lowest, all of that excess capacity is wasted. This is why a typical data center runs at only 8-15% efficiency. And don’t be fooled just because you’re running in the cloud: you’re still being charged for the resources your application is not using on each virtual machine (someone is benefiting — it’s just your cloud provider, not you).
And finally, with machines as the abstraction, organizations must employ armies of people to manually configure and maintain each individual application on each individual machine. People become the bottleneck for trying to run new applications, even when there are ample resources already provisioned that are not being utilized.

If my laptop were a data center

Imagine if we ran applications on our laptops the same way we run applications in our data centers. Each time we launched a web browser or text editor, we’d have to specify which CPU to use, which memory modules are addressable, which caches are available, and so on. Thankfully, our laptops have an operating system that abstracts us away from the complexities of manual resource management.
In fact, we have operating systems for our workstations, servers, mainframes, supercomputers, and mobile devices, each optimized for their unique capabilities and form factors.
We’ve already started treating the data center itself as one massive warehouse-scale computer. Yet, we still don’t have an operating system that abstracts and manages the hardware resources in the data center just like an operating system does on our laptops.

It’s time for the data center OS

What would an operating system for the data center look like?
From an operator’s perspective it would span all of the machines in a data center (or cloud) and aggregate them into one giant pool of resources on which applications would be run. You would no longer configure specific machines for specific applications; all applications would be capable of running on any available resources from any machine, even if there are other applications already running on those machines.
From a developer’s perspective, the data center operating system would act as an intermediary between applications and machines, providing common primitives to facilitate and simplify building distributed applications.
The data center operating system would not need to replace Linux or any other host operating systems we use in our data centers today. The data center operating system would provide a software stack on top of the host operating system. Continuing to use the host operating system to provide standard execution environments is critical to immediately supporting existing applications.
The data center operating system would provide functionality for the data center that is analogous to what a host operating system provides on a single machine today: namely, resource management and process isolation. Just like with a host operating system, a data center operating system would enable multiple users to execute multiple applications (made up of multiple processes) concurrently, across a shared collection of resources, with explicit isolation between those applications.

An API for the data center

Perhaps the defining characteristic of a data center operating system is that it provides a software interface for building distributed applications. Analogous to the system call interface for a host operating system, the data center operating system API would enable distributed applications to allocate and deallocate resources, launch, monitor, and destroy processes, and more. The API would provide primitives that implement common functionality that all distributed systems need. Thus, developers would no longer need to independently re-implement fundamental distributed systems primitives (and inevitably, independently suffer from the same bugs and performance issues).
Centralizing common functionality within the API primitives would enable developers to build new distributed applications more easily, more safely, and more quickly. This is reminiscent of when virtual memory was added to host operating systems. In fact, one of the virtual memory pioneers wrote that “it was pretty obvious to the designers of operating systems in the early 1960s that automatic storage allocation could significantly simplify programming.”
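To make the idea concrete, here is a minimal sketch of the shape such an API might take. Everything in it is hypothetical and invented for illustration (there is no real module with these names); the point is that resources and process lifecycle become program calls rather than tickets filed with an operations team:

    # Hypothetical sketch of a data center OS API. All names here are
    # invented for illustration; this is not a real library.
    from abc import ABC, abstractmethod
    from typing import Iterator

    class Job(ABC):
        @abstractmethod
        def events(self) -> Iterator[str]:
            """Stream lifecycle events (started, failed, finished) for monitoring."""

        @abstractmethod
        def kill(self) -> None:
            """Destroy the job's processes and return their resources to the pool."""

    class Allocation(ABC):
        """A grant of aggregate resources (CPUs, memory, disk), not machines."""

        @abstractmethod
        def launch(self, name: str, command: str, instances: int) -> Job:
            """Launch processes against this allocation; placement is the OS's job."""

        @abstractmethod
        def release(self) -> None:
            """Return the allocation to the shared pool."""

    class Cluster(ABC):
        @abstractmethod
        def allocate(self, cpus: float, mem_gb: float, disk_gb: float) -> Allocation:
            """Allocate resources from anywhere in the data center's shared pool."""

Note that nothing in this interface names a machine: a developer asks for resources, launches processes against them, and monitors events, much as they would fork and wait on processes through syscalls on a single host.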

Example primitives

Two primitives specific to a data center operating system that would immediately simplify building distributed applications are service discovery and coordination. Unlike on a single host where very few applications need to discover other applications running on the same host, discovery is the norm for distributed applications. Likewise, most distributed applications achieve high availability and fault tolerance through some means of coordination and/or consensus, which is notoriously hard to implement correctly and efficiently.
Developers today are forced to pick between existing tools for service discovery and coordination, such as Apache ZooKeeper and CoreOS’ etcd. This forces organizations to deploy multiple tools for different applications, significantly increasing operational complexity and the maintenance burden.
Having the data center operating system provide primitives for discovery and coordination not only simplifies development, it also enables application portability. Organizations can change the underlying implementations without rewriting the applications, much like you can choose between different filesystem implementations on a host operating system today.
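For a feel of what the discovery primitive involves when applications wire it up themselves today, here is a minimal sketch using Apache ZooKeeper through the Python kazoo client (assuming a ZooKeeper server reachable at 127.0.0.1:2181; the paths and addresses are illustrative). Each service instance registers an ephemeral node that vanishes if the process dies, and consumers list the children to find live instances:

    # Minimal service discovery on Apache ZooKeeper via the kazoo client.
    # Assumes a ZooKeeper server at 127.0.0.1:2181; paths are illustrative.
    from kazoo.client import KazooClient

    zk = KazooClient(hosts="127.0.0.1:2181")
    zk.start()

    # A service instance registers itself with an ephemeral, sequenced node;
    # if the process dies, ZooKeeper deletes the node automatically.
    zk.ensure_path("/services/web")
    zk.create("/services/web/instance-", value=b"10.0.0.5:8080",
              ephemeral=True, sequence=True)

    # A consumer discovers the live instances of the service.
    for child in zk.get_children("/services/web"):
        address, _stat = zk.get("/services/web/" + child)
        print(child, "->", address.decode())

    zk.stop()

A data center operating system would fold exactly this kind of registration and lookup into its standard primitives, so every team would not have to run and babysit its own coordination cluster.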

A new way to deploy applications

With a data center operating system, a software interface replaces the human interface that developers typically interact with when trying to deploy their applications today; rather than a developer asking a person to provision and configure machines to run their applications, developers launch their applications using the data center operating system (e.g., via a CLI or GUI), and the application executes using the data center operating system’s API.
This supports a clean separation of concerns between operators and users: operators specify the amount of resources allocatable to each user, and users launch whatever applications they want, using whatever resources are available to them. Because an operator now specifies how much of any type of resource is available, but not which specific resource, a data center operating system, and the distributed applications running on top, can be more intelligent about which resources to use in order to execute more efficiently and better handle failures. Because most distributed applications have complex scheduling requirements (think Apache Hadoop) and specific needs for failure recovery (think of a database), empowering software to make decisions instead of humans is critical for operating efficiently at data-center scale.

The “cloud” is not an operating system

Why do we need a new operating system? Didn’t Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) already solve these problems?
IaaS doesn’t solve our problems because it’s still focused on machines. It isn’t designed with a software interface intended for applications to use in order to execute. IaaS is designed for humans to consume, in order to provision virtual machines that other humans can use to deploy applications; IaaS turns machines into more (virtual) machines, but does not provide any primitives that make it easier for a developer to build distributed applications on top of those machines.
PaaS, on the other hand, abstracts away the machines, but is still designed first and foremost to be consumed by a human. Many PaaS solutions do include numerous tangential services and integrations that make building a distributed application easier, but not in a way that’s portable across other PaaS solutions.

Apache Mesos: The distributed systems kernel

Distributed computing is now the norm, not the exception, and we need a data center operating system that delivers a layer of abstraction and a portable API for distributed applications. Not having one is hindering our industry. Developers should be able to build distributed applications without having to reimplement common functionality. Distributed applications built in one organization should be capable of being run in another organization easily.
Existing cloud computing solutions and APIs are not sufficient. Moreover, the data center operating system API must be built, like Linux, in an open and collaborative manner. Proprietary APIs force lock-in, deterring a healthy and innovative ecosystem from growing. It’s time we created the POSIX for distributed computing: a portable API for distributed systems running in a data center or on a cloud.
The open source Apache Mesos project, of which I am one of the co-creators and the project chair, is a step in that direction. Apache Mesos aims to be a distributed systems kernel that provides a portable API upon which distributed applications can be built and run.
Many popular distributed systems have already been built directly on top of Mesos, including Apache Spark, Apache Aurora, Airbnb’s Chronos, and Mesosphere’s Marathon. Other popular distributed systems have been ported to run on top of Mesos, including Apache Hadoop, Apache Storm, and Google’s Kubernetes, to list a few.
Chronos is a compelling example of the value of building on top of Mesos. Chronos, a distributed system that provides highly available and fault-tolerant cron, was built on top of Mesos in only a few thousand lines of code and without having to do any explicit socket programming for network communication.
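To give a feel for why that is possible, here is a compressed sketch of a framework scheduler written against the Mesos Python bindings (the mesos.interface and mesos.native packages of the 0.20-era releases; the shape is right, but treat exact signatures as approximate and see the project's bundled example frameworks for the authoritative version). Mesos offers the framework resources; the scheduler answers with tasks, and Mesos handles all of the network plumbing:

    # Sketch of a minimal Mesos framework scheduler (0.20-era Python API;
    # consult the Mesos example frameworks for the authoritative details).
    from mesos.interface import Scheduler, mesos_pb2
    from mesos.native import MesosSchedulerDriver

    class EchoScheduler(Scheduler):
        def resourceOffers(self, driver, offers):
            # Mesos offers us resources; we answer with tasks to launch.
            for offer in offers:
                task = mesos_pb2.TaskInfo()
                task.task_id.value = "echo-task"
                task.slave_id.value = offer.slave_id.value
                task.name = "echo"
                task.command.value = "echo hello from Mesos"
                cpus = task.resources.add()
                cpus.name = "cpus"
                cpus.type = mesos_pb2.Value.SCALAR
                cpus.scalar.value = 0.1
                driver.launchTasks(offer.id, [task])

        def statusUpdate(self, driver, update):
            print("task state:", update.state)

    framework = mesos_pb2.FrameworkInfo()
    framework.user = ""  # empty string asks Mesos to fill in the current user
    framework.name = "EchoFramework"
    MesosSchedulerDriver(EchoScheduler(), framework, "127.0.0.1:5050").run()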
Companies like Twitter and Airbnb are already using Mesos to help run their datacenters, while companies like Google have been using in-house solutions they built almost a decade ago. In fact, just like Google’s MapReduce spurred an industry around Apache Hadoop, Google’s in-house datacenter solutions have had close ties with the evolution of Mesos.
While not a complete data center operating system, Mesos, along with some of the distributed applications running on top, provides some of the essential building blocks from which a full data center operating system can be built: the kernel (Mesos), a distributed init.d (Marathon/Aurora), cron (Chronos), and more.

Sunday 30 November 2014

Big Data & Traders


Bankers beware – big data are watching you. A recent spate of trading scandals has spurred big banks to turn to new technology and data sources as they attempt to crack down on illegal behaviour by their staff.



Financial institutions are increasingly moving beyond traditional compliance systems, which have focused on monitoring electronic communications and transaction prices, and deploying state-of-the-art surveillance software as they seek to stay one step ahead of wily bankers and traders.

The drive to incorporate behavioural science and new data sources comes after analysis of electronic messages helped regulators ensnare bankers accused of rigging interbank lending rates and, most recently, foreign exchange rates. That behaviour has led to a spate of multibillion-dollar fines and settlements.

Front Office

The worry now is that bank employees will go underground to engage in illicit behaviour, prompting an internal race as compliance officers seek to root out malfeasance by “front office” staff.
“The days of traders saying something really dumb, which then gets picked up by a filter, are largely gone,” said Michael O’Brien, global head of SMARTS Broker, Nasdaq’s market surveillance business.
Banks have long worried that staff may turn to personal cell phones or social media platforms such as Snapchat or Facebook to avoid having their work communications monitored by compliance systems.

To combat that risk, some compliance departments are benchmarking staff’s performance against average use of internal communications – trying to detect discrepancies between their profitability and the number of messages sent to clients.
“If an average trader’s been hitting the same benchmark or better than peers on their desk but the volume of internal messages they’re sending is significantly less, that’s a red flag,” says Varun Mehta, vice-president of legal and compliance solutions at Clutch Group. “Are they using a burner phone or something else?”
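A toy version of that benchmark shows the shape of the check. The numbers below are invented for illustration; a real system would sit on the bank's messaging archives and P&L records, and would normalize for role and product:

    # Toy red-flag benchmark: traders who hit the desk's P&L benchmark while
    # being unusually quiet on monitored internal channels. Invented data.
    from statistics import mean

    desk = {
        # trader: (monthly P&L in $k, internal messages sent)
        "alice": (410, 1250),
        "bob":   (395, 1180),
        "carol": (430, 310),   # similar P&L, far fewer monitored messages
        "dave":  (150, 900),
    }

    avg_pnl = mean(pnl for pnl, _ in desk.values())
    avg_msgs = mean(msgs for _, msgs in desk.values())

    for trader, (pnl, msgs) in desk.items():
        if pnl >= avg_pnl and msgs < 0.5 * avg_msgs:
            print(f"red flag: {trader} meets the P&L benchmark but sent "
                  f"{msgs} internal messages vs a desk average of {avg_msgs:.0f}")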
“The holy grail is marrying up the communication surveillance with trading surveillance. If someone is really determined to do something illegal it makes it that much more difficult to detect.”
– US bank lawyer

Surveillance

To combat the use of personal phones, banks are also restricting mobile phone usage on trading floors to certain frequencies that can be monitored. Key card data and human resources information may also be examined to ensure bankers and traders are not taking too many “smoke breaks”.

The increasingly high-tech surveillance of bankers and traders has raised legal issues for financial institutions, particularly in Europe, where banks may struggle to reconcile increasingly heavy compliance requirements with local privacy laws.

Communications analysis – alongside transaction analysis, still the predominant form of compliance monitoring – is becoming more sophisticated too, with algorithms and artificial intelligence being used to identify patterns of speech and networks of contacts as opposed to merely catching keywords.
Bill Nosal, head of product development at SMARTS Broker, said traders were unlikely to use obvious statements such as “it’s time to set up our insider trading ring or manipulate a market”. “That’s why it’s sometimes better to do the linkages of communications and tie them to abnormal trading,” he said.

My assignment this week

Recently I spent some time in a foreign exchange dealing room in London.

The key was to bring senior traders or former bankers into the compliance function to help identify strategies or unusual patterns that can be worked into the system. The holy grail for the client is marrying up communication surveillance with trading surveillance: if someone is really determined to do something illegal, it is that much more difficult to detect.

Natural Language analysis

Fonetic, a Spanish software company that analyses audio and written communications, launched in the US this year. Its software – utilised by clients such as BBVA and Santander – uses real-time phrase recognition technology, as opposed to analysing transcripts, and is capable of flagging related words and phrases to broaden its search capabilities. For example, if a bank is trying to identify traders talking about bananas, it could use the software to search not only “banana” but also “yellow”, “long”, “fruit” and “monkey”.
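The banana example reduces to a simple expansion step: start from a target term, widen the query with related words, and flag any communication that matches. A minimal text-only sketch (the hand-built related-terms map here stands in for Fonetic's technology, which additionally works on live audio rather than transcripts):

    # Toy keyword-expansion flagging for communications surveillance.
    # The related-terms map is hand-built for illustration only.
    RELATED = {
        "banana": {"banana", "yellow", "long", "fruit", "monkey"},
    }

    def flag(messages, target):
        terms = RELATED.get(target, {target})
        for msg in messages:
            if set(msg.lower().split()) & terms:
                yield msg

    chats = [
        "did you move the yellow one yet?",
        "lunch at noon?",
        "the monkey trade prints tomorrow",
    ]
    for hit in flag(chats, "banana"):
        print("flagged:", hit)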

The message to bank trading desks is clear: banks are shining an increasingly bright spotlight on all corners of their business.

Friday 28 November 2014

Virtual Machines… The Holy Grail of Local Web Development

Like so many other developers I started out working on plain HTML sites on my local computer and then using FTP to send them to a remote server where the world could get to them. That worked great until dynamic sites came around and I could no longer test my code locally, resulting in a, well, less than perfect workflow. You see, once I started getting into dynamic sites (I started with ColdFusion and classic ASP) I developed a habit of developing directly on a remote server, usually the production site, with tools like Dreamweaver that allowed me to connect and work directly on the remote machine. It was in fact this workflow that kept me a loyal Dreamweaver user for the better part of a decade, as nothing else at the time could compete with this type of workflow very well.
The next step in my development evolution was a local server. When I was still on Windows I would use XAMPP or even IIS and later when I moved to Mac I discovered MAMP Pro and finally AMPPS. For the first time I had a true Apache, MySQL, PHP stack on my local computer that, at least for major versions, could mimic the bulk of my server environments with only minimal modifications. It worked well but it wasn’t perfect. Switching computers could be a nightmare and should anything in the stack become corrupt I was in for some serious trouble. But at least I wasn’t working on the remote server anymore. This workflow was good, but just not good enough.
So for the last 5 years I’ve played around with various combinations of MAMP Pro, AMPPS, Homebrew, XAMPP and a few other solutions in search of the perfect server that would mimic my production environment almost perfectly, be easily portable and would require a bare minimum amount of maintenance.
Throughout this time, and in parallel to my quest for the perfect development environment, I have been a heavy user of another now popular technology, virtual machines, which I use to test sites on Windows, Ubuntu and other platforms where I could replicate neither the browser nor the operating system very well using my Mac. While this technology was rather cumbersome 5 years ago when I started using it, today it has matured to the point where you can use a virtual machine as easily as you can use a word processor and with about the same performance level you would expect from the host OS.
While I’ve been developing locally and using virtual machines for quite some time, I had never been able to successfully combine the two. Sure, I had heard of Puppet, Chef and Vagrant, but they seemed to me to be anything but mature and far more of a hassle than setting up a proper MAMP or AMPPS environment.
Finally, last weekend Mark Jaquith changed my opinion on all of this with his WordCamp San Francisco talk titled Confident Commits, Delightful Deploys. He pointed out just how mature Vagrant and Puppet had become and how easily they could be used to build a local development environment running the same OS and packages I use in production (Ubuntu, Apache, NGINX, etc.) and requiring only minutes to set up or tear down once the initial configuration was complete.
The Holy Grail of development environments has been found, and it doesn’t require a whole heck of a lot to get started.
  1. Install VirtualBox and its Guest Additions. This is the virtualization engine that will allow your new development environment to run.
  2. Install Vagrant. It doesn’t matter what your OS is; this is a free and easy-to-install package that serves as a wrapper around the virtual machine in VirtualBox. Once configured it will download the image you need, set up the virtual machine and pass it off to a provisioning script to make sure everything you need is installed and configured.
  3. Get a base configuration. This is easy with sites like https://puphpet.com/ which allow you to configure what you need for a basic development environment. I started with it and then customized it to meet my needs. Mostly I just changed a few variables so that it worked a little easier for me out of the box. You can find my modified configuration on GitHub if you’re interested.
  4. Start working. Once you have a configuration, getting a development environment is as easy as going to the folder containing it and typing vagrant up in the terminal; a scripted version of this day-to-day cycle is sketched below.
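For the curious, here is what that day-to-day cycle looks like scripted. This is just a sketch driving the standard Vagrant commands from Python (it assumes Vagrant is installed and the directory contains a Vagrantfile; the project path is made up):

    # Day-to-day Vagrant lifecycle, driven from Python for illustration.
    # Assumes Vagrant is installed and project_dir contains a Vagrantfile.
    import subprocess

    def vagrant(args, project_dir):
        subprocess.run(["vagrant"] + args, cwd=project_dir, check=True)

    project = "/path/to/my-site"  # hypothetical project location
    vagrant(["up"], project)                      # boot and provision the VM
    vagrant(["ssh", "-c", "uname -a"], project)   # run a command inside it
    vagrant(["halt"], project)                    # shut it down
    vagrant(["destroy", "-f"], project)           # tear it down; rebuild any time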

Why Use Virtual Machines?

Yeah, it’s a bit of work to set this all up but once you do there are some serious benefits to this workflow.
  1. If you’re in a team everyone will be developing on the same environment. We’re not there at my day job yet but we’re getting there. Once implemented it won’t matter if folks are on Windows, Mac or Linux. The development environment will be the same throughout allowing us to spend far less time as a team debugging.
  2. Your development environment can match your production environment. If you’re deploying to a given environment why not work in that same environment? Again this saves time and sanity in debugging as you’re no longer changing configurations as you push your project up the line.
  3. Speed and efficiency. Once you have Vagrant, an install script, and VirtualBox you can switch machines, bring new environments up or down and generally get to coding a lot faster, without having to spend hours tweaking your local setup to match your servers as closely as possible. For me this is the real benefit at the moment. I can work on any machine I happen to have handy and I no longer have to worry about either setup or whether I’ve configured the new machine to match the old. It just works.

Room to Improve

Of course, as I’ve only been using this for a couple of days there is still room for improvement in my setup. In particular I need to work on both my Puppet script and Vagrant box to take into account running PHP over FastCGI, as well as a couple of other changes. It’s darn close, and I’m sure with a little more time I’ll have it pretty much perfect. Fortunately bringing new environments up and down is so easy that I really have nothing to lose by tweaking it.

Give it a try

One of the best things about trying a new method like this is you don’t have to give up the way you currently work. I admit I did delete AMPPS from my computer about a day after starting with Vagrant and Puppet, as I simply don’t need it anymore, but there is no reason you have to. There is nothing to conflict with your old setup, so the only thing you have to lose is a few minutes of your time and an obsolete way of dealing with local development.

Wednesday 19 November 2014

Enterprise Security

Microsoft has acquired Aorato, an innovator in enterprise security. My team are using this acquisition to give customers a new level of protection against threats through better visibility into their identity infrastructure.  


With Aorato we will accelerate our ability to give customers powerful identity and access solutions that span on-premises and the cloud, which is central to our overall hybrid cloud strategy.

We all know corporate security is more important than ever. Nearly every day there are more headlines about breaches, fraud and data loss. Unfortunately, compromised passwords, stolen identities and network intrusion are a fact of life. Companies need new, intelligent solutions to help them adapt and defend themselves inside the network, not just at its edge.

Aorato’s sophisticated technology uses machine learning to detect suspicious activity on a company’s network. It understands what normal behavior is and then identifies anomalies, so a company can quickly see suspicious behavior and take appropriate measures to help protect itself. Key to Aorato’s approach is the Organizational Security Graph, a living, continuously updated view of all of the people and machines accessing an organization’s Windows Server Active Directory (AD).

AD is used by most enterprises to store user identities and administer access to critical business applications and systems. Therefore, most of our enterprise customers should be able to easily take advantage of Aorato’s technology. This will complement similar capabilities that we have developed for Azure Active Directory, our cloud-based identity and access management solution.

We are excited about the technology that Aorato has built and, especially, the people joining the Microsoft team through this acquisition.

In the mobile-first, cloud-first era, Microsoft is committed to moving nimbly and aggressively to provide customers with solutions to their top challenges.

Friday 14 November 2014

.Net Goes Open Source

Here's my rollup and take on the situation.
  • Microsoft are serious about open source and cross-platform.
    • .NET Core 5 is the modern, componentized framework that ships via NuGet. That means you can ship a private version of the .NET Core Framework with your app. Other apps' versions can't change your app's behavior.
    • They are building a .NET Core CLR for Windows, Mac and Linux and it will be both open source and supported by Microsoft. It'll all happen at https://github.com/dotnet.
    • They are open sourcing RyuJIT and the .NET GC and making them both cross-platform.
  • ASP.NET 5 will work everywhere.
    • ASP.NET 5 will be available for Windows, Mac, and Linux. Mac and Linux support will come soon and it's all going to happen in the open on GitHub at https://github.com/aspnet.
    • ASP.NET 5 will include a web server for Mac and Linux called Kestrel, built on libuv. It's similar to the one that comes with Node, and you could front it with Nginx for production, for example.
  • Developers should have a great experience.
    • There is a new FREE SKU of Visual Studio for open source developers and students called Visual Studio Community. It supports extensions and lots more, all in one download. This is not Express. This is basically Pro.
    • Visual Studio 2015 and ASP.NET 5 will support gulp, grunt, bower and npm for front end developers.
    • A community team (including myself and Sayed from the ASP.NET and web tools team) has created the OmniSharp organization, along with the Kulture build system, as a way to bring real IntelliSense to Sublime, Atom, Brackets, Vim, and Emacs on Windows, Linux, and Mac. Check out http://www.omnisharp.net as well as blog posts by team members such as Jonathan Channon.
  • Even more open source.
    • Much of the .NET Core Framework 4.6 and its Reference Source is going on GitHub. It's being relicensed under the MIT license, so Mono (and you!) can use that source code in their .NET implementations.
    • There's a new hub for Microsoft open source hosted on GitHub at http://microsoft.github.io.
Open sourcing .NET makes good sense. It makes good business sense, good community sense, and today everyone at Microsoft sees this the way we do.

Wednesday 12 November 2014

Android on Visual Studio

Microsoft released Visual Studio 2015 Preview this week and with it you now have options for Android development. When choosing one of those Android development options, Visual Studio will also install the brand new Visual Studio Emulator for Android to use as a target for debugging your app. 

Before I walk you through using this new emulator, let’s talk about why we are building an emulator for Android – feel free to skip the next section to go to the interesting part :-)

The need for an emulator for Android

We know that emulators can play a key part in the edit-compile-debug cycle (a bigger part than devices) and we believe that you need an emulator like the one we are releasing today.
Having a great emulator to debug against doesn’t mean you don’t need a device, and having a device to debug against doesn’t mean you won’t benefit from a good emulator. They are complementary.
You definitely need to test against a device for the following scenarios which are unsuitable for any emulator:
  1. Measuring the performance characteristics of your code. While an emulator can help you with correctness issues, it will never perfectly emulate the performance characteristics of your code running on the actual devices that you want to test against. You want to measure the performance as your users see it.
  2. Testing hardware-specific issues. If what you are trying to test is the touch-responsiveness of your game, or the speaker quality for your media app, you will want to do that type of testing on the target devices. Ditto if you are trying to work around an OEM-specific bug.
  3. Purely evaluating the actual user experience in real-world situations, e.g. do your designed interactions work for a user walking around, using your app one-handed with just their thumb?
For all other testing, which as part of your edit-compile-debug cycle normally takes at least 80% of your time, you’d want to use an emulator (barring other blocking issues or limitations with your emulator of choice). Use an emulator for the following reasons:
  1. The majority of your testing is for correctness issues (not performance) and the majority of your code is probably not dealing with hardware specific issues. So use an emulator!
  2. You don’t want to spend a bunch of money buying a bunch of devices (and keep doing so every time a new device appears on the market), just to test things like screen resolution, DPI settings for different screen sizes, different API levels / platform versions, when you can configure that in software (in an emulator).
  3. You don’t want to have to take physical action with your device to test some sensor, e.g. responding to movement or location changes, or simulating network/battery changes. Instead you want to simulate the sensor values easily and quickly in an emulator, e.g. simulate a trip to another town while your app responds to the change of location.
  4. There is also the convenience element. Connecting to a device (typically dealing with cables), managing that connection and its lifetime, using one of your USB ports, is not as simple as launching the emulator and treating it like every other desktop application running on your dev machine.
So emulators are great and can be a key part in the edit-compile-debug cycle and we want to make sure that our emulator is best-in-class. You have told us about several pain points with existing emulators that we are starting to address with our release:
  • Slow. This is the number one complaint we’ve heard from Android developers. “The emulator is painfully slow, it hurts my productivity, and I’ll use a device.” Slow is not acceptable. If anything, using the emulator should be faster than using a device so you can test your scenarios faster (remember, you are not using emulators to test the performance of your code, you just need them to be as fast as possible for your own use).
  • Conflict with Hyper-V on Windows. Many emulators require you to disable Hyper-V or don’t work as well with Hyper-V as they do without. Using Hyper-V is part of the development setup for many developer activities, so asking you to restart your machine (multiple times a day) to toggle Hyper-V is not acceptable.
    • One specialized variant of this is using the Windows Phone emulator (which itself is based on Hyper-V). It is a real pain having to make changes and reboot every time you want to switch from an Android emulator to a Windows Phone emulator to test your cross-platform code.
  • Additional acquisition and installation step. If your main development environment is Visual Studio, you don’t want to have to acquire the emulator separately and follow a separate installation process.
  • Separate cost. Having a great emulator that can cost you as much as your main development environment is not an option for most. The Visual Studio Emulator for Android comes with VS without additional charge.
In short, we will address all of those pain points with the Visual Studio Emulator for Android. Now, let’s recap Visual Studio’s debugging story for Android and how to choose the VS Emulator for Android.

Debugging against the Visual Studio Emulator for Android

With Visual Studio 2015 Preview you can target Android and edit-compile-debug regardless of your choice of programming models: JavaScript (or TypeScript) with Cordova, C++, or C# with Xamarin.
With all three of those choices, when you start debugging, you must first choose a target. That target can be a device, or it can be one of many emulators that you may have running on your machine. Let’s see how to choose a debug target for Cordova and C++ in Visual Studio 2015 Preview, and for Xamarin in Visual Studio 2013.
With C++ projects, you pick the emulator directly from the Debug Target menu.
With Cordova projects you will want to pick one of the last two entries in the Debug Target menu (definitely avoid picking the option “Android Emulator”, as that is the slow one that comes with the SDK).
With Xamarin projects, the emulator appears as a similar option in the Debug Target menu.
For best results with Xamarin projects, disable/uncheck "Use Fast Deployment" under the Android Options under the Xamarin Project Properties.
Note: If you want to use the VS Emulator for Android from a different IDE, as a temporary workaround you can always launch our emulator from Visual Studio using one of the options above, then close that project and leave the emulator running and available for your other IDE to target (over ADB).
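As a concrete illustration of that workaround, once the emulator is up it behaves like any other ADB device, so the standard adb commands reach it. A quick sketch driving them from Python (assumes adb from the Android SDK is on your PATH; the APK path is made up):

    # Talk to the running emulator over ADB from outside Visual Studio.
    # Assumes adb (Android SDK) is on PATH; the APK path is illustrative.
    import subprocess

    # List attached devices; the VS Emulator for Android shows up here.
    subprocess.run(["adb", "devices"], check=True)

    # Install (or reinstall) an x86 build of your app onto the emulator.
    subprocess.run(["adb", "install", "-r", "bin/MyApp-x86.apk"], check=True)

    # Dump the device log while you exercise the app.
    subprocess.run(["adb", "logcat", "-d"], check=True)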
Once you have chosen your debug target and hit F5, your app will be deployed to the emulator, as per the regular VS debugging flow you can hit breakpoints in your code, see the call stack, inspect variables, etc. So now that you know how to use the emulator for debugging, let’s continue exploring its cool features!

Sensor simulations and other capabilities of the Visual Studio Emulator for Android

Beyond using the emulator as a deployment target, you can also take advantage of sensor simulation and other capabilities – let’s examine a few of those, in no particular order.

Zoom

You can change the size of the emulator as you see it on your development machine (the host). The dots per inch (DPI) for the emulator is based on the host monitor DPI, regardless of the zoom value. This allows you to scale the emulator in case it is taking too much space on your desktop.
To change the size, use the “Zoom” button on the emulator’s vertical toolbar.
You can also use the “Fit to Screen” button above the “Zoom” button to fit the emulator on your screen.
If you are going to take screenshots of your app running in the emulator (e.g. with the Snipping tool) for best results remember to set the zoom level to the maximum of 100% - or even better, use our built-in Screenshot tool support that I describe below.

Orientation / Rotation

Unless your app only supports a fixed orientation, you should test how your app responds to orientation changes, and what it looks like in portrait, left-landscape, and right-landscape orientations. Simply rotate the emulator left or right with the two corresponding buttons on the vertical toolbar: “Rotate Left” and “Rotate Right”. The size of the emulator remains the same when you rotate.

Network Info

The emulator reuses the network connection of the host machine, so there is nothing for you to configure.
You can also review the emulator’s current network settings. On the vertical toolbar click on the “Tools” button to show the “Additional Tools” fly out panel, and then click on the “Network” tab.

Location (GPS)

If your app does anything with navigation, geofencing, walking/biking/driving, then you will love the location and driving simulation in the emulator under the “Location” tab when you open the “Additional Tools”.
You can navigate the map by dragging it around, by zooming in and out, or even by searching for a location. You can place and remove pins on the map, thus creating map points. Those appear as latitude/longitude coordinates in the list in the bottom left. From the toolbar at the top you can even save those map points to an XML file and later load them from the file.
Instead of having each map point immediately change the GPS location of the emulator (“Live” mode), you have other options too! You may want to place a few map points and then simulate transitioning between those points. To do that, at the toolbar at the top switch from “Live” mode to “Pin” mode. Then you can press the small play button at the end of the toolbar to transition between the map points. You can even enter a transition interval (in seconds).
Finally, you can choose a third mode that is similar to “Pin”, called “Route” mode. In this mode you can also simulate transitions between the points, but with some additional twists. The simulator will calculate an actual path between the points and generate invisible points at one-second intervals between the points you choose. The overall speed at which it will play those points is determined by a second setting, and your options are: “Walking” (5 kilometers per hour), “Biking” (25 km/h), “Speed Limit” (variable, dependent on the map point), and “Fast”.

Accelerometer

If your app tracks and responds to movement of the phone, you can test that using the “Accelerometer” tab when you open the “Additional Tools”.
Simply click and hold the red dot in the middle and drag it towards the directions you want to simulate, within the 3D plane. As you do that your app will receive movement events if it has registered for them.
You can also see the X, Y, Z values in the bottom left. Under those values you can “Reset” to the starting position, and also pick the starting Orientation from these values: Portrait Standing, Landscape Standing, Portrait Flat, and Landscape Flat.
Lastly you can simulate the phone shaking by clicking the “Play” button in the bottom right. The only visual indication that a shake is taking place is the rapidly changing X, Y, Z values; when they stop changing you’ll know the shake is over.

Power/Battery Simulation (and Power button)

If you write your app to respond to battery charge changes, then you will like the emulator’s ability to simulate that by switching to the “Battery” tab when you open the “Additional Tools”.
There is a slider that allows you to set the exact charge value of the battery. Notice as you slide down/up how the battery icon in the top right changes to reflect the change. Your app can also respond accordingly.
If you change the Battery Charging State to not be “Charging”, then the emulator’s screen will go blank after a timeout period. You can configure the timeout through the built-in “Settings” app (look for the “Sleep” option under “Display”). If the emulator sleeps due to this, then you can wake it up through the “Power” button on the vertical toolbar.

Screenshot

To take a screenshot of your app, open the “Additional Tools” and switch to the “Screenshot” tab. Then click on the “Capture” button, which will take a screenshot and show you an instant preview. If you want to keep the screenshot click on the “Save…” button. If you don’t like the screenshot you took, ignore it or click “Capture” again.
The screenshot tool always takes screenshots at 100% (indicated by the resolution in the bottom left corner), regardless of the Zoom setting. They are also always portrait, regardless of the rotation chosen.

Install APKs through drag and drop

You install apps on Android through an application package file which is known as an APK. If you have an APK that you want to install on the Visual Studio Emulator for Android, just drag it onto the emulator from Windows Explorer. You will see a message in the emulator indicating progress “File transfer in progress…” followed by a message box “File foo installed successfully in Android”. Remember to make sure your APKs have code built for x86!
You can also drag and drop other (non-APK) files to the emulator and they will be placed onto the SD Card, which brings us to the next topic.

SD Card

If your app has a need to read or write to the SD card of the target, the emulator simulates that by making available a folder representing an SD card.
Note that the Android image uses a separate VHD for SD card support. So if you want to transfer files to/from the SD card on your development machine, you can mount the VHD to Windows: Close the emulator (to shut down the VM), then navigate to the VHD location in Windows Explorer, and double click the VHD to mount it. By default the VHD is located under the path:
C:\Users\%username%\AppData\Local\Microsoft\XDE\Android\vsemu.sdcard.vhd
At this point, the VHD is mounted as an additional drive in Windows and you can use it pretty much like any other drive. Before restarting the emulator you must unmount the VHD, which you can do by right-clicking on the drive and selecting Eject.
Having SD card support in the image also allows other built-in Android apps to function, such as the browser downloads and the camera app – which brings me to the next capability.

Camera

Typically you’d be using the camera from your app (using an appropriate API), and we support that. You can also use the built-in camera app directly. When you launch the camera in the emulator you will see a fixed animated image that you can take a snapshot of, simulating taking a picture.

Audio Playback, Keyboard Text Input…

There are other capabilities that the emulator provides that are taken for granted, even though they require “work” from the product team :-). I won’t list them all here but two of them are that:
  • you can use your computer’s keyboard to enter text in the emulator
  • any audio coming from the emulator can be heard through your computer’s speakers

Configurations

With this Preview release you can pick between two out of the box configurations:
  • Typical Android Phone: 5” Screen, 295 PPI, 720x1280, 1024 MB
  • Typical Android Tablet: 7” Screen, 315 PPI, 1080x1920, 2048 MB
With the Preview bits, if you want to change the amount of memory you can change the Startup RAM in the Settings dialog of Hyper-V Manager. Notice that there you can also change the number of cores allocated to each configuration (the default for Preview is 2 cores). Caveat: we have not tested all possible configurations you could choose!
We are just getting started; there is a lot more to come in subsequent releases, and you can help us prioritize new sensor simulation and other capabilities by taking our survey.

A peek under the covers

Conceptually, an emulator consists of 4 pieces:

  1. A virtual machine (represented as a VHD) of the target you are emulating, in this case Android. We started with the source at the Android Open Source Project (AOSP), evolved it, and configured an x86 virtual image for fast Visual Studio debugging.
  2. A small shell/chrome that as a user you see and interact with, which loads the virtual image and projects it through a rendering control. Think of this as remote desktop: you are essentially RDPing to the image. We started with the desktop application that is the shell/chrome of the Windows Phone Emulator (internally known as XDE), which is already rich in functionality. Then we made modifications for our Android-specific needs.
  3. A virtualization technology that XDE needs to load the image before it can RDP to it. Windows has a great virtualization technology called Hyper-V and that is what we used.
  4. The connection pipeline between VS and XDE and also between the debug engine and the virtual image. Here we reused parts of what existed between XDE and Visual Studio, and also the Android Debug Bridge (ADB) channel.
Now let’s look at some of the limitations we have today, and hopefully you can give us input on which ones we need to address first.

Current limitations

Today we are sharing with you an early preview release, with issues/bugs that we look forward to you reporting to us. We also have known limitations – please tell us which ones are most important to you so we can prioritize these on our backlog:
  • If your app makes direct or indirect use of OpenGL 2 or higher, that will not render on our emulator yet. This support is coming soon, and judging from an early internal-only build that I have, it makes the image feel even snappier!
  • There are many different versions of Android on the market. The one you have with this release of the Visual Studio Emulator for Android is KitKat API 19 (version android-4.4.4_r1). More versions coming later…
  • If your app takes advantage of the Google Play Services layer then it will not work out of the box in our emulator. That is because when building our Android images we do not include the GMS packages (which require additional licensing that we do not have yet).
  • You need to recompile your code for x86. If you have parts of your code that can only be compiled for ARM, or you depend on 3rd-party libraries for which you do not have an x86 version, your code will not run on our emulator at this point.
  • You can only install the Visual Studio Emulator for Android on an operating system where Hyper-V is supported. Examples of where Hyper-V is not supported include Windows 7, non-Windows machines, and inside another VM.