Sunday 30 November 2014

Big data & Traders


Bankers beware – big data are watching you. A recent swath of trading scandals has spurred big banks to turn to new technology and data sources as they attempt to crack down on illegal behaviour by their staff.



Financial institutions are increasingly moving beyond traditional compliance systems, which have focused on monitoring electronic communications and transaction prices, and adopting state-of-the-art surveillance software as they seek to stay one step ahead of wily bankers and traders.

The drive to incorporate behavioural science and new data sources comes after analysis of electronic messages has helped regulators ensnare bankers accused of rigging interbank lending rates and, most recently, foreign exchange rates. That behaviour has led to a swath of multibillion-dollar fines and settlements.

Front Office

The worry now is that bank employees will go underground to engage in illicit behaviour, prompting an internal race as compliance officers seek to root out malfeasance by “front office” staff.
“The days of traders saying something really dumb, which then gets picked up by a filter, are largely gone,” said Michael O’Brien, global head of SMARTS Broker, Nasdaq’s market surveillance business.

Banks have long worried that staff may turn to personal cell phones or social media platforms, such as Snapchat or Facebook, to avoid having their work communications monitored by compliance systems.

To combat that risk, some compliance departments are benchmarking staff’s performance against average use of internal communications – trying to detect discrepancies between their profitability and the number of messages sent to clients.
“If an average trader’s been hitting the same benchmark or better than peers on their desk but the volume of messages they’re sending on internal messages is significantly less, that’s a red flag,” says Varun Mehta, vice-president of legal and compliance solutions at Clutch Group. “Are they using a burner phone or something else?”
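
As a purely illustrative sketch of the red flag Mehta describes (the threshold, field names and class names here are invented for the example, not any bank’s actual rules), the screen might reduce to something like this:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative sketch only -- thresholds and names are invented, not any
// bank's actual compliance rules. Flag traders whose P&L meets the desk
// benchmark while their internal message volume runs far below the desk average.
class Trader
{
    public string Name;
    public double PnL;       // profit and loss for the period
    public int MessagesSent; // internal messages over the same period
}

class RedFlagScreen
{
    public static IEnumerable<Trader> Flag(List<Trader> desk)
    {
        double avgPnL = desk.Average(t => t.PnL);
        double avgMessages = desk.Average(t => t.MessagesSent);

        // Hitting the benchmark on half the usual message traffic is the
        // discrepancy described above: "are they using a burner phone?"
        return desk.Where(t => t.PnL >= avgPnL && t.MessagesSent < 0.5 * avgMessages);
    }
}
```

A real system would of course normalise for role, desk and client mix before comparing anyone to an average; the point is only that the signal is a simple ratio of two ordinary data sources.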
“The holy grail is marrying up the communication surveillance with trading surveillance. If someone is really determined to do something illegal it makes it that much more difficult to detect.”
– US bank lawyer

Surveillance

To combat the use of personal phones, banks are also restricting mobile phone usage on trading floors to certain frequencies that can be monitored. Key card data and human resources information may also be examined to ensure bankers and traders are not taking too many “smoke breaks”.

The increasingly high-tech surveillance of bankers and traders has raised legal issues with financial institutions, particularly in Europe where banks may struggle to reconcile increasingly heavy compliance requirements with local privacy laws.

Communications analysis – alongside transaction analysis, still the predominant form of compliance monitoring – is becoming more sophisticated too, with algorithms and artificial intelligence being used to identify patterns of speech and networks of contacts as opposed to merely catching keywords.
Bill Nosal, head of product development at SMARTS Broker, said traders were unlikely to use obvious statements such as “it’s time to set up our insider trading ring or manipulate a market”, which is why “it’s sometimes better to do the linkages of communications and tie them to abnormal trading”.
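
To make that linkage idea concrete, here is a toy version (the one-hour window and the record shapes are invented for this sketch) that joins abnormal trades to prior contact between the same two parties:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Toy illustration of tying communication links to abnormal trading --
// the time window and record shapes are invented for this sketch.
class Message { public string From; public string To; public DateTime SentAt; }
class AbnormalTrade { public string TraderA; public string TraderB; public DateTime ExecutedAt; }

class LinkageScreen
{
    static readonly TimeSpan Window = TimeSpan.FromHours(1);

    // Surface abnormal trades that were preceded, within the window,
    // by direct contact between the two parties involved.
    public static IEnumerable<AbnormalTrade> Correlate(
        IEnumerable<AbnormalTrade> trades, List<Message> messages)
    {
        return trades.Where(t => messages.Any(m =>
            ((m.From == t.TraderA && m.To == t.TraderB) ||
             (m.From == t.TraderB && m.To == t.TraderA)) &&
            m.SentAt <= t.ExecutedAt &&
            t.ExecutedAt - m.SentAt <= Window));
    }
}
```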

My assignment this week

Recently I spent some time in a foreign exchange dealing room in London.

The key was to bring senior traders or former bankers into the compliance function to help identify strategies or unusual patterns that can be worked into the system. The holy grail for the client is marrying up communication surveillance with trading surveillance; if someone is really determined to do something illegal, that makes it that much more difficult to detect.

Natural Language analysis

Fonetic, a Spanish software company that analyses audio and written communications, launched in the US this year. Its software – utilised by clients such as BBVA and Santander – uses real-time phrase recognition technology, as opposed to analysing transcripts, and is capable of flagging related words and phrases to broaden its search capabilities. For example, if a bank is trying to identify traders talking about bananas, it could use the software to search not only “banana” but also “yellow,” “long”, “fruit,” and “monkey”.
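
Mechanically, that broadening step behaves like a term-expansion lookup over a curated map of related words; a trivial sketch (the dictionary entries are just the article’s banana example):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Trivial sketch of search-term broadening. Real systems such as Fonetic's
// run this over recognised phone audio in real time; here it is a dictionary.
class TermExpander
{
    static readonly Dictionary<string, string[]> Related = new Dictionary<string, string[]>
    {
        // The article's example: hunting for chatter about "bananas".
        { "banana", new[] { "yellow", "long", "fruit", "monkey" } }
    };

    public static IEnumerable<string> Expand(string term)
    {
        string[] related;
        return Related.TryGetValue(term, out related)
            ? new[] { term }.Concat(related)
            : new[] { term };
    }

    static void Main()
    {
        // A search for "banana" now also flags the related terms.
        Console.WriteLine(string.Join(", ", Expand("banana")));
    }
}
```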

The message to bank trading desks is clear: banks are shining an increasingly bright spotlight on all corners of their business.

Friday 28 November 2014

Virtual Machines… The Holy Grail of Local Web Development

Like so many other developers I started out working on plain HTML sites on my local computer and then using FTP to send them to a remote server where the world could get to them. That worked great until dynamic sites came around and I could no longer test my code locally, resulting in a, well, less than perfect workflow. You see, once I started getting into dynamic sites (I began with ColdFusion and classic ASP) I developed a habit of developing directly on a remote server, usually the production site, with tools like Dreamweaver that allowed me to connect and work directly on the remote machine. It was in fact this workflow that kept me a loyal Dreamweaver user for the better part of a decade, as nothing else at the time could compete with this type of workflow very well.
The next step in my development evolution was a local server. When I was still on Windows I would use XAMPP or even IIS and later when I moved to Mac I discovered MAMP Pro and finally AMPPS. For the first time I had a true Apache, MySQL, PHP stack on my local computer that, at least for major versions, could mimic the bulk of my server environments with only minimal modifications. It worked well but it wasn’t perfect. Switching computers could be a nightmare and should anything in the stack become corrupt I was in for some serious trouble. But at least I wasn’t working on the remote server anymore. This workflow was good, but just not good enough.
So for the last 5 years I’ve played around with various combinations of MAMP Pro, AMPPS, Homebrew, XAMPP and a few other solutions in search of the perfect server that would mimic my production environment almost perfectly, be easily portable and require a bare minimum of maintenance.
Throughout this time, and in parallel to my quest for the perfect development environment, I have been a heavy user of another now popular technology, virtual machines, which I use to test sites on Windows, Ubuntu and other platforms where I could replicate neither the browser nor the operating system very well using my Mac. While this technology was rather cumbersome 5 years ago when I started using it, today it has matured to the point where you can use a virtual machine as easily as you can use a word processor and with about the same performance level you would expect from the host OS.
While I’ve been developing locally and using virtual machines for quite some time, I had never been able to successfully combine the two. Sure, I had heard of Puppet, Chef and Vagrant, but they seemed to me to be anything but mature and far more of a hassle than setting up a proper MAMP or AMPPS environment.
Finally, last weekend Mark Jaquith changed my opinion on all of this with his WordCamp San Francisco talk titled Confident Commits, Delightful Deploys. He pointed out just how mature Vagrant and Puppet had become and how easily they could be used to build a local development environment running the same OS and packages I use in production (Ubuntu, Apache, NGINX, etc.), requiring only minutes to set up or tear down once the initial configuration was complete.
The Holy Grail of development environments has been found and it doesn’t require a whole heck of a lot to get started.
  1. Install VirtualBox and its Guest Additions. This is the virtualization engine that will allow your new development environment to run.
  2. Install Vagrant. It doesn’t matter what your OS is; this is a free and easy download that serves as a wrapper around the virtual machine in VirtualBox. Once configured, it will download the image you need, set up the virtual machine and pass it off to a provisioning script to make sure everything you need is installed and configured.
  3. Get a base configuration. This is easy with sites like https://puphpet.com/ which allow you to configure what you need for a basic development environment. I started with it and then customized it to meet my needs. Mostly I just changed a few variables so it worked a little more easily for me out of the box. You can find my modified configuration on GitHub if you’re interested.
  4. Start working. Once you have a script, getting a development environment is as easy as going to the location of the script you downloaded and typing vagrant up in the terminal (see the sketch after this list).
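
For a taste of what step 3 produces, here is a minimal hand-written Vagrantfile sketch; the box name, port and Puppet paths are illustrative placeholders, not my actual PuPHPet-generated configuration:

```ruby
# Minimal Vagrantfile sketch -- box name, port and provisioner paths are
# placeholders, not my real PuPHPet-generated config.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"               # base image Vagrant downloads
  config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.synced_folder ".", "/var/www"         # share project files with the VM
  config.vm.provision "puppet" do |puppet|
    puppet.manifests_path = "puppet/manifests"    # provisioning scripts
    puppet.manifest_file  = "default.pp"
  end
end
```

From that folder, vagrant up builds and provisions the machine, vagrant halt shuts it down, and vagrant destroy throws it away entirely, which is what makes rebuilding so cheap.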

Why Use Virtual Machines?

Yeah, it’s a bit of work to set this all up but once you do there are some serious benefits to this workflow.
  1. If you’re in a team everyone will be developing on the same environment. We’re not there at my day job yet but we’re getting there. Once implemented it won’t matter if folks are on Windows, Mac or Linux. The development environment will be the same throughout allowing us to spend far less time as a team debugging.
  2. Your development environment can match your production environment. If you’re deploying to a given environment why not work in that same environment? Again this saves time and sanity in debugging as you’re no longer changing configurations as you push your project up the line.
  3. Speed and efficiency. Once you have Vagrant, an install script, and VirtualBox you can switch machines, spin environments up or down and generally get to coding a lot faster without having to spend hours tweaking your local setup to match your servers as closely as possible. For me this is the real benefit at the moment. I can work on any machine I happen to have handy and I no longer have to worry about either setup or whether I’ve configured the new machine to match the old. It just works.

Room to Improve

Of course, as I’ve only been using this for a couple of days there is still room for improvement in my setup. In particular I need to work on both my Puppet script and Vagrant box to take into account running PHP over FastCGI, as well as a couple of other changes. It’s darn close, and I’m sure with a little more time I’ll have it pretty much perfect. Fortunately bringing new environments up and down is so easy that I really have nothing to lose by tweaking it.

Give it a try

One of the best things about trying a new method like this is you don’t have to give up the way you currently work. I admit I deleted AMPPS from my computer about a day after starting with Vagrant and Puppet, as I simply don’t need it anymore, but there is no reason you have to. Nothing here conflicts with your old setup, so the only thing you have to lose is a few minutes of your time and an obsolete way of dealing with local development.

Wednesday 19 November 2014

Enterprise Security

Microsoft has acquired Aorato, an innovator in enterprise security. My team is using this acquisition to give customers a new level of protection against threats through better visibility into their identity infrastructure.


With Aorato we will accelerate our ability to give customers powerful identity and access solutions that span on-premises and the cloud, which is central to our overall hybrid cloud strategy.

We all know corporate security is more important than ever. Nearly every day there are more headlines about breaches, fraud and data loss. Unfortunately, compromised passwords, stolen identities and network intrusion are a fact of life. Companies need new, intelligent solutions to help them adapt and defend themselves inside the network, not just at its edge.

Aorato’s sophisticated technology uses machine learning to detect suspicious activity on a company’s network. It understands what normal behavior is and then identifies anomalies, so a company can quickly see suspicious behavior and take appropriate measures to help protect itself. Key to Aorato’s approach is the Organizational Security Graph, a living, continuously-updated view of all of the people and machines accessing an organization’s Windows Server Active Directory (AD).
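
Aorato’s actual models are proprietary, so purely as a toy illustration of the learn-a-baseline-then-flag-deviations idea (the class, accounts and rule here are invented), consider:

```csharp
using System;
using System.Collections.Generic;

// Toy sketch of baseline-vs-anomaly logic -- NOT Aorato's actual algorithm.
// Learn which machines each account normally authenticates from, then flag
// logons from machines outside that learned baseline.
class LogonAnomalyDetector
{
    private readonly Dictionary<string, HashSet<string>> baseline =
        new Dictionary<string, HashSet<string>>();

    public void Learn(string account, string machine)
    {
        HashSet<string> machines;
        if (!baseline.TryGetValue(account, out machines))
        {
            machines = new HashSet<string>();
            baseline[account] = machines;
        }
        machines.Add(machine);
    }

    public bool IsSuspicious(string account, string machine)
    {
        HashSet<string> machines;
        // Only flag deviations from an established baseline.
        return baseline.TryGetValue(account, out machines) && !machines.Contains(machine);
    }

    static void Main()
    {
        var detector = new LogonAnomalyDetector();
        detector.Learn("alice", "ALICE-LAPTOP"); // observed during the learning period
        detector.Learn("alice", "BUILD-01");

        // A logon from a never-before-seen machine is flagged for review.
        Console.WriteLine(detector.IsSuspicious("alice", "DC-ADMIN")); // True
    }
}
```

The real product builds a far richer graph over accounts, machines and protocols, but the shape of the idea (profile normal, surface abnormal) is the same.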

AD is used by most enterprises to store user identities and administer access to critical business applications and systems. Therefore, most of our enterprise customers should be able to easily take advantage of Aorato’s technology. This will complement similar capabilities that we have developed for Azure Active Directory, our cloud-based identity and access management solution.

We are excited about the technology that Aorato has built and, especially, the people joining the Microsoft team through this acquisition.

In the mobile-first, cloud-first era, Microsoft is committed to moving nimbly and aggressively to provide customers with solutions to their top challenges.

Friday 14 November 2014

.Net Goes Open Source

Here's my rollup and take on the situation.
  • Microsoft is serious about open source and cross platform.
    • .NET Core 5 is the modern, componentized framework that ships via NuGet. That means you can ship a private version of the .NET Core Framework with your app. Other apps' versions can't change your app's behavior.
    • They are building a .NET Core CLR for Windows, Mac and Linux and it will be both open source and supported by Microsoft. It'll all happen at https://github.com/dotnet.
    • They are open sourcing RyuJIT and the .NET GC and making them both cross-platform.
  • ASP.NET 5 will work everywhere.
    • ASP.NET 5 will be available for Windows, Mac, and Linux. Mac and Linux support will come soon and it's all going to happen in the open on GitHub at https://github.com/aspnet.
    • ASP.NET 5 will include a web server for Mac and Linux called Kestrel, built on libuv. It's similar to the one that comes with node, and you could front it with Nginx in production, for example (a minimal sketch appears at the end of this post).
  • Developers should have a great experience.
    • There is a new FREE SKU of Visual Studio for open source developers and students called Visual Studio Community. It supports extensions and lots more, all in one download. This is not Express. This is basically Pro.
    • Visual Studio 2015 and ASP.NET 5 will support gulp, grunt, bower and npm for front end developers.
    • A community team (including myself and Sayed from the ASP.NET and web tools team) has created the OmniSharp organization along with the Kulture build system as a way to bring real Intellisense to Sublime, Atom, Brackets, Vim, and Emacs on Windows, Linux, and Mac. Check out http://www.omnisharp.net as well as blog posts by team members such as Jonathan Channon.
  • Even more open source.
    • Much of the .NET Core Framework 4.6 and its Reference Source is going on GitHub. It's being relicensed under the MIT license, so Mono (and you!) can use that source code in their .NET implementations.
    • There's a new hub for Microsoft open source hosted on GitHub at http://microsoft.github.io.
Open sourcing .NET makes good sense. It makes good business sense, good community sense, and today everyone at Microsoft sees this like we do.
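
For a taste of what the open, cross-platform stack looks like, here is a minimal ASP.NET 5-style Startup class of the kind the early samples use; the namespaces and hosting commands have been shifting between betas, so treat this as a sketch rather than gospel:

```csharp
using Microsoft.AspNet.Builder;
using Microsoft.AspNet.Http;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        // Respond to every request -- just enough to prove the server is up.
        app.Run(async context =>
        {
            await context.Response.WriteAsync("Hello from ASP.NET 5!");
        });
    }
}
```

On Mac or Linux you would then serve it with Kestrel via the command defined in the project's project.json (invoked through the K runtime in the current bits), and optionally put Nginx in front for production.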

Wednesday 12 November 2014

Android on Visual Studio

Microsoft released Visual Studio 2015 Preview this week and with it you now have options for Android development. When choosing one of those Android development options, Visual Studio will also install the brand new Visual Studio Emulator for Android to use as a target for debugging your app. 

Before I walk you through using this new emulator, let’s talk about why we are building an emulator for Android – feel free to skip the next section to go to the interesting part :-)

The need for an emulator for Android

We know that emulators can play a key part in the edit-compile-debug cycle (a bigger part than devices, in fact) and we believe that you need an emulator like the one we are releasing today.
Having a great emulator to debug against doesn’t mean you don’t need a device, and having a device to debug against doesn’t mean you won’t benefit from a good emulator. They are complementary.
You definitely need to test against a device for the following scenarios which are unsuitable for any emulator:
  1. Measuring the performance characteristics of your code. While an emulator can help you with correctness issues, it will never perfectly emulate the performance characteristics of your code running on the actual devices that you want to test against. You want to measure the performance as your users see it.
  2. Testing hardware-specific issues. If what you are trying to test is the touch-responsiveness of your game, or the speaker quality for your media app, you will want to do that type of testing on the target devices. Ditto if you are trying to work around an OEM-specific bug.
  3. Purely evaluating the actual user experience in real-world situations, e.g. do your designed interactions work for a user walking around using your app one handed with just their thumb alone?
For all other testing, which as part of your edit-compile-debug cycle normally takes at least 80% of your time, you’d want to use an emulator (barring other blocking issues or limitations with your emulator of choice). Use an emulator for the following reasons:
  1. The majority of your testing is for correctness issues (not performance) and the majority of your code is probably not dealing with hardware specific issues. So use an emulator!
  2. You don’t want to spend a bunch of money buying a bunch of devices (and keep doing so every time a new device appears on the market), just to test things like screen resolution, DPI settings for different screen sizes, different API levels / platform versions, when you can configure that in software (in an emulator).
  3. You don’t want to have to take physical action with your device to test some sensor, e.g. respond to movement or location changes or simulating network/battery changes. Instead you want to simulate the sensor values easily and quickly in an emulator, e.g. simulate a trip to another town while your app responds to the change of location.
  4. There is also the convenience element. Connecting to a device (typically dealing with cables), managing that connection and its lifetime, using one of your USB ports, is not as simple as launching the emulator and treating it like every other desktop application running on your dev machine.
So emulators are great and can be a key part in the edit-compile-debug cycle and we want to make sure that our emulator is best-in-class. You have told us about several pain points with existing emulators that we are starting to address with our release:
  • Slow. This is the number one complaint we’ve heard from Android developers. “The emulator is painfully slow, it hurts my productivity, so I’ll use a device.” Slow is not acceptable. If anything, using the emulator should be faster than using a device so you can test your scenarios faster (remember, you are not using emulators to test the performance of your code, you just need them to be as fast as possible for your own use).
  • Conflict with Hyper-V on Windows. Many emulators require you to disable Hyper-V or don’t work as well with Hyper-V as they do without. Using Hyper-V is part of the development setup for many developer activities, so asking you to restart your machine (multiple times a day) to toggle Hyper-V is not acceptable.
    • One specialized variant of this is using the Windows Phone emulator (which itself is based on Hyper-V). It is a real pain having to make changes and reboot every time you want to switch from an Android emulator to a Windows Phone emulator to test your cross-platform code.
  • Additional acquisition and installation step. If your main development environment is Visual Studio, you don’t want to have to acquire the emulator separately and follow a separate installation process.
  • Separate cost. Having a great emulator that can cost you as much as your main development environment is not an option for most. The Visual Studio Emulator for Android comes with VS without additional charge.
In short, we will address all of those pain points with the Visual Studio Emulator for Android. Now, let’s recap Visual Studio’s debugging story for Android and how to choose the VS Emulator for Android.

Debugging against the Visual Studio Emulator for Android

With Visual Studio 2015 Preview you can target Android and edit-compile-debug regardless of your choice of programming models: JavaScript (or TypeScript) with Cordova, C++, or C# with Xamarin.
With all three of those choices, when you start debugging, you must first choose a target. That target can be a device, or it can be one of many emulators that you may have running on your machine. Let’s see how to choose a debug target for Cordova and C++ in Visual Studio 2015 Preview, and for Xamarin in Visual Studio 2013.
With C++ projects, the Debug Target menu looks like this:
With Cordova projects you will want to pick the last two entries in the Debug Target menu as per the following screenshot:
(Definitely avoid picking the option “Android Emulator” as that is the slow one that comes with the SDK.)
With Xamarin projects, the option looks like this:
For best results with Xamarin projects, disable/uncheck “Use Fast Deployment” under Android Options in the Xamarin project properties.
Note: If you want to use the VS Emulator for Android from a different IDE, as a temporary workaround you can always launch our emulator from Visual Studio using one of the options above, then close that project and leave the emulator running and available for your other IDE to target (over ADB).
Once you have chosen your debug target and hit F5, your app will be deployed to the emulator. As per the regular VS debugging flow, you can hit breakpoints in your code, see the call stack, inspect variables, etc. So now that you know how to use the emulator for debugging, let’s continue exploring its cool features!

Sensor simulations and other capabilities of the Visual Studio Emulator for Android

Beyond using the emulator as a deployment target, you can also take advantage of sensor simulation and other capabilities – let’s examine a few of those, in no particular order.

Zoom

You can change the size of the emulator as you see it on your development machine (the host). The dots per inch (DPI) for the emulator is based on the host monitor DPI, regardless of the zoom value. This allows you to scale the emulator in case it is taking too much space on your desktop.
To change the size, use the “Zoom” button on the emulator’s vertical toolbar.
You can also use the “Fit to Screen” button above the “Zoom” button to fit the emulator on your screen.
If you are going to take screenshots of your app running in the emulator (e.g. with the Snipping tool) for best results remember to set the zoom level to the maximum of 100% - or even better, use our built-in Screenshot tool support that I describe below.

Orientation / Rotation

Unless your app only supports a fixed orientation, you should test how your app responds to orientation changes, and what it looks like in portrait, left-landscape, and right-landscape orientations. Simply rotate the emulator left or right with the two corresponding buttons on the vertical toolbar: “Rotate Left” and “Rotate Right”. The size of the emulator remains the same when you rotate.

Network Info

The emulator reuses the network connection of the host machine, so there is nothing for you to configure.
You can also review the emulator’s current network settings. On the vertical toolbar click on the “Tools” button to show the “Additional Tools” fly out panel, and then click on the “Network” tab.

Location (GPS)

If your app does anything with navigation, geofencing, walking/biking/driving, then you will love the location and driving simulation in the emulator under the “Location” tab when you open the “Additional Tools”.
You can navigate the map by dragging it around, by zooming in and out, or even by searching for a location. You can place and remove pins on the map, thus creating map points. Those appear as latitude/longitude coordinates in the list in the bottom left. From the toolbar at the top you can even save those map points to an XML file and later load them from the file.
Instead of having each map point immediately change the GPS location of the emulator (“Live” mode), you have other options too! You may want to place a few map points and then simulate transitioning between them. To do that, from the toolbar at the top switch from “Live” mode to “Pin” mode. Then you can press the small play button at the end of the toolbar to transition between the map points. You can even enter a transition interval (in seconds).
Finally, you can choose a third mode that is similar to “Pin”, which is called “Route” mode. In this mode you can also simulate transitions between the points but with some additional twists. The simulator will calculate an actual path between the points and generate invisible points at 1 second intervals between the points you choose. The overall speed at which it will play those points is determined by a second setting and your options are: “Walking” (5 kilometers per hour), “Biking” (25 km/h), “Speed Limit” (variable dependent on map point), and “Fast”.
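
On the app side, the simulated fixes arrive through the standard Android location APIs, so nothing emulator-specific is needed. Here is a minimal Xamarin.Android sketch (the activity name and update interval are illustrative, and the manifest needs the ACCESS_FINE_LOCATION permission):

```csharp
using Android.App;
using Android.Content;
using Android.Locations;
using Android.OS;
using Android.Widget;

// Minimal Xamarin.Android sketch: receive the emulator's simulated GPS fixes.
// Requires ACCESS_FINE_LOCATION in the manifest; names are illustrative.
[Activity(Label = "LocationDemo", MainLauncher = true)]
public class LocationDemoActivity : Activity, ILocationListener
{
    LocationManager locationManager;

    protected override void OnResume()
    {
        base.OnResume();
        locationManager = (LocationManager)GetSystemService(Context.LocationService);
        // Ask for GPS fixes at most once a second -- the emulator's map points
        // and simulated routes are delivered through this same channel.
        locationManager.RequestLocationUpdates(LocationManager.GpsProvider, 1000, 0f, this);
    }

    protected override void OnPause()
    {
        base.OnPause();
        locationManager.RemoveUpdates(this); // stop listening when backgrounded
    }

    public void OnLocationChanged(Location location)
    {
        Toast.MakeText(this,
            string.Format("Lat {0:F5}, Lon {1:F5}", location.Latitude, location.Longitude),
            ToastLength.Short).Show();
    }

    // Required by ILocationListener; nothing to do for this demo.
    public void OnProviderDisabled(string provider) { }
    public void OnProviderEnabled(string provider) { }
    public void OnStatusChanged(string provider, Availability status, Bundle extras) { }
}
```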

Accelerometer

If your app tracks and responds to movement of the phone, you can test that using the “Accelerometer” tab when you open the “Additional Tools”.
Simply click and hold the red dot in the middle and drag it towards the directions you want to simulate, within the 3D plane. As you do that your app will receive movement events if it has registered for them.
You can also see the X, Y, Z values in the bottom left. Under those values you can “Reset” to the starting position, and also pick the starting Orientation from these values: Portrait Standing, Landscape Standing, Portrait Flat, and Landscape Flat.
Lastly, you can simulate shaking the phone by clicking the “Play” button in the bottom right. The only visual indication that a shake is taking place is the rapidly changing X, Y, Z values; when they stop changing you’ll know the shake is over.
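
As with location, the simulated readings arrive through the normal Android sensor APIs. A minimal Xamarin.Android sketch of registering for them (class name illustrative):

```csharp
using Android.App;
using Android.Content;
using Android.Hardware;
using Android.Util;

// Minimal Xamarin.Android sketch: subscribe to the accelerometer so the
// emulator's simulated movement (including shakes) reaches the app.
[Activity(Label = "AccelDemo", MainLauncher = true)]
public class AccelDemoActivity : Activity, ISensorEventListener
{
    SensorManager sensorManager;

    protected override void OnResume()
    {
        base.OnResume();
        sensorManager = (SensorManager)GetSystemService(Context.SensorService);
        Sensor accelerometer = sensorManager.GetDefaultSensor(SensorType.Accelerometer);
        sensorManager.RegisterListener(this, accelerometer, SensorDelay.Ui);
    }

    protected override void OnPause()
    {
        base.OnPause();
        sensorManager.UnregisterListener(this); // stop listening when backgrounded
    }

    public void OnSensorChanged(SensorEvent e)
    {
        // e.Values maps to the X, Y, Z readout shown in the emulator's tool window.
        Log.Debug("AccelDemo",
            string.Format("X={0:F2} Y={1:F2} Z={2:F2}", e.Values[0], e.Values[1], e.Values[2]));
    }

    public void OnAccuracyChanged(Sensor sensor, SensorStatus accuracy) { }
}
```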

Power/Battery Simulation (and Power button)

If you write your app to respond to battery charge changes, then you will like the emulator’s ability to simulate that by switching to the “Battery” tab when you open the “Additional Tools”.
There is a slider that allows you to set the exact charge value of the battery. Notice as you slide down/up how the battery icon in the top right changes to reflect the change. Your app can also respond accordingly.
If you change the Battery Charging State to not be “Charging”, then the emulator’s screen will go blank after a timeout period. You can configure the timeout through the built-in “Settings” app (look for the “Sleep” option under “Display”). If the emulator sleeps due to this, you can wake it up through the “Power” button on the vertical toolbar.
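
To read the simulated charge level from code, the standard sticky battery broadcast works the same as on a device; a small Xamarin-flavoured sketch (the helper class is mine):

```csharp
using Android.Content;
using Android.OS;

// Sketch: read the (simulated) battery level via the sticky
// ACTION_BATTERY_CHANGED broadcast -- standard Android, nothing emulator-specific.
public static class BatteryLevelReader
{
    public static int GetBatteryPercent(Context context)
    {
        // Registering with a null receiver just returns the last sticky intent.
        Intent status = context.RegisterReceiver(null, new IntentFilter(Intent.ActionBatteryChanged));
        int level = status.GetIntExtra(BatteryManager.ExtraLevel, -1);
        int scale = status.GetIntExtra(BatteryManager.ExtraScale, -1);
        return (int)(100f * level / scale);
    }
}
```

Slide the battery slider in the “Battery” tab and the next call reflects the new value.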

Screenshot

To take a screenshot of your app, open the “Additional Tools” and switch to the “Screenshot” tab. Then click on the “Capture” button, which will take a screenshot and show you an instant preview. If you want to keep the screenshot click on the “Save…” button. If you don’t like the screenshot you took, ignore it or click “Capture” again.
The screenshot tool always takes screenshots at 100% (indicated by the resolution in the bottom left corner), regardless of Zoom setting. They are also always portrait, regardless of rotation chosen.

Install APKs through drag and drop

You install apps on Android through an application package file which is known as an APK. If you have an APK that you want to install on the Visual Studio Emulator for Android, just drag it onto the emulator from Windows Explorer. You will see a message in the emulator indicating progress “File transfer in progress…” followed by a message box “File foo installed successfully in Android”. Remember to make sure your APKs have code built for x86!
You can also drag and drop other (non-APK) files to the emulator and they will be placed onto the SD Card, which brings us to the next topic.

SD Card

If your app has a need to read or write to the SD card of the target, the emulator simulates that by making available a folder representing an SD card.
Note that the Android image uses a separate VHD for SD card support. So if you want to transfer files to/from the SD card on your development machine, you can mount the VHD in Windows: close the emulator (to shut down the VM), then navigate to the VHD location in Windows Explorer, and double-click the VHD to mount it. By default the VHD is located under the path:
C:\Users\%username%\AppData\Local\Microsoft\XDE\Android\vsemu.sdcard.vhd
At this point, the VHD is mounted as an additional drive to Windows and you can use it pretty much like any other drive. Before restarting the emulator you must un-mount the VHD, which you can do by right clicking on the drive and selecting Eject.
Having SD card support in the image also allows other built-in Android apps to function, such as the browser downloads and the camera app – which brings me to the next capability.

Camera

Typically you’d be using the camera from your app (using an appropriate API), and we support that. You can also use the built-in camera app directly. When you launch the camera in the emulator you will see a fixed animated image that you can take a snapshot of, simulating taking a picture.

Audio Playback, Keyboard Text Input…

There are other capabilities that the emulator provides that are taken for granted, even though they require “work” from the product team :-). I won’t list them all here but two of them are that:
  • you can use your computer’s keyboard to enter text in the emulator
  • any audio coming from the emulator can be heard through your computer’s speakers

Configurations

With this Preview release you can pick between two out of the box configurations:
  • Typical Android Phone: 5” Screen, 295 PPI, 720x1280, 1024 MB RAM
  • Typical Android Tablet: 7” Screen, 315 PPI, 1080x1920, 2048 MB RAM
With the Preview bits if you want to change the amount of memory, you can change the Startup RAM in the Settings dialog from the Hyper-V Manager. Notice that there you can also change the number of cores allocated to each configuration (the default for Preview is 2 cores). Caveat: we have not tested all possible configurations you could choose!
We are just getting started; there is a lot more to come in subsequent releases, and you can help us prioritize new sensor simulation and other capabilities by taking our survey.

A peek under the covers

Conceptually, an emulator consists of 4 pieces:

  1. A virtual machine (represented as a VHD) of the target you are emulating, in this case Android. We started with the source at the Android Open Source Project (AOSP), evolved it, and configured an x86 virtual image for fast Visual Studio debugging.
  2. A small shell/chrome that as a user you see and interact with, which loads the virtual image and projects it through a rendering control. Think of this as remote desktop: you are essentially RDPing to the image. We started with the desktop application that is the shell/chrome of the Windows Phone Emulator (internally known as XDE), which is already rich in functionality. Then we made modifications for our Android-specific needs.
  3. A virtualization technology that XDE needs to load the image before it can RDP to it. Windows has a great virtualization technology called Hyper-V and that is what we used.
  4. The connection pipeline between VS and XDE and also between the debug engine and the virtual image. Here we reused parts of what existed between XDE and Visual Studio, and also the Android Debug Bridge (ADB) channel.
Now let’s look at some of the limitations we have today, and hopefully you can give us input on which ones we need to address first.

Current limitations

Today we are sharing with you an early preview release, with issues/bugs that we look forward to you reporting to us. We also have known limitations – please tell us which ones are most important to you so we can prioritize these on our backlog:
  • If your app makes direct or indirect use of OpenGL 2 or higher, that will not render on our emulator yet. This support is coming soon, and judging by an early internal-only build that I have, it makes the image feel even snappier!
  • There are many different versions of Android on the market. The one you have with this release of the Visual Studio Emulator for Android is KitKat API 19 (version android-4.4.4_r1). More versions coming later…
  • If your app takes advantage of the Google Play Services layer then it will not work out of the box in our emulator. That is because when building our Android images we do not include the GMS packages (which require additional licensing that we do not have yet).
  • You need to recompile your code for x86. If you have parts of your code that can only be compiled for ARM, or you depend on 3rd-party libraries for which you do not have an x86 version, your code will not run on our emulator at this point.
  • You can only install the Visual Studio Emulator for Android on an operating system where Hyper-V is supported. Examples of where Hyper-V is not supported include Windows 7, non-Windows machines, and inside another VM.

Tuesday 11 November 2014

DMAIC or Kaizen?

There is a long-standing debate about which improvement methodology your company should adopt – Lean or Six Sigma? And if you peel back the onion a little further, does it really come down to a debate between Kaizen and DMAIC?
The question comes up from time to time, and this morning a client ran this by me…

Hmm, it got me thinking.
In a fight between the two approaches, who would win?
In the blue corner we have the Kaizen approach, sponsored by Lean. In the red corner, we have the DMAIC approach, sponsored by Six Sigma:




Lean: bottom-up, employee-led change using Kaizen events

Six Sigma: top-down, Blackbelt-led change using DMAIC

What is Kaizen?
Kaizen is a Japanese word that simply means “to make better”. Its main characteristics are:
  • Projects are well defined and baseline stats are collected before starting event
  • Dedicated resources are subject matter experts (SMEs) and focus on only the event
  • The solutions should come from the SMEs as they will need to act as champions for the change
  • Often follows the Deming/Shewhart cycle of Plan-Do-Check-Act
  • A Kaizen event typically lasts 3–5 days
  • Management MUST make resources available from support functions during the event, e.g. HR, Finance, Warehouse, Sales, Marketing etc.
  • Will implement solutions based on 80% confidence instead of the 95% typical in DMAIC
  • Implementation is completed within the week of the event, but any items that fall outside it are completed within 20 days
  • Basic analysis is acceptable with indicative results enough to make decisions

What is DMAIC? 
DMAIC is an acronym for a five-step process: Define, Measure, Analyse, Improve and Control. Its main characteristics are:
  • Existing process is not meeting customer requirements but the reason why is not obvious
  • Time is spent on analyzing the baseline data to understand current performance
  • Baseline data is used to prove/validate the benefits once re-measured
  • Solutions can come from anywhere and may not be popular with employees, as they may mean significant changes
  • Solutions require 95% confidence in being correct before implementation
  • Can be a level of risk associated with the solution that will need to be accepted by the business before implementation
  • Change is led by a Blackbelt or Greenbelt due to the nature of the data analysis
Now that we understand the two approaches a little better, which continuous improvement tool should you use? Kaizen or DMAIC? Well, we think there is a place for both in a modern organization, depending on what it is being used for. We’ve put together a simple matrix to help: the right choice can usually be determined by the answers to a few simple questions.

Interestingly, many practitioners have started to plan out their Kaizen events using the DMAIC process steps. This way they ensure that everything is considered. So it may not be a case of Kaizen vs DMAIC after all but instead, a powerful combination of both!
A hybrid approach – the third way :)

Monday 10 November 2014

Towards Agile Architecture


Architecture provides the foundation from which systems are built and an architectural model defines the vision on which your architecture is based.

The scope of architecture can be that of a single application, of a family of applications, for an organization, or for an infrastructure such as the Internet that is shared by many organizations. Regardless of the scope, my experience is that you can take an agile approach to the modeling, development, and evolution of an architecture.

Here are a few ideas to get you thinking:

There is nothing special about architecture. Heresy, you say! Absolutely not. Agile Modeling’s value of humility states that everyone has equal value on a project; therefore the efforts of anyone in the role of architect are just as important as, but no more important than, the efforts of everyone else. Yes, good architects have a specialized skillset appropriate to the task at hand and should have the experience to apply those skills effectively. The exact same thing can be said, however, of good developers, of good coaches, of good senior managers, and so on. Humility is an important success factor for your architecture efforts because it is what you need to avoid the development of an ivory tower architecture and to avoid the animosity of your teammates. The role of architect is valid for most projects, it just shouldn’t be a role that is fulfilled by someone atop a pedestal.


You should beware of ivory tower architectures. An ivory tower architecture is one that is often developed by an architect or architectural team in relative isolation from the day-to-day development activities of your project team(s). The mighty architectural guru(s) go off and develop one or more models describing the architecture that the minions on your team are to build to, for the architect(s) know best. Ivory tower architectures are often beautiful things, usually well documented with lots of fancy diagrams and wonderful vision statements proclaiming them to be your salvation.

In theory, which is typically what your architect(s) base their work on, ivory tower architectures work perfectly. However, experience shows that ivory tower architectures suffer from significant problems. First, the “minion developers” are unlikely to accept the architecture because they had no say in its development.

Second, ivory tower architectures are often unproven (ivory tower architects rarely dirty their hands writing code) and as a result are a significant risk to your project until you know they actually work, through the concrete feedback provided by a technical prototype. Third, ivory tower architectures will be incomplete if the architects did nothing other than model, because you can never think through everything your system needs. Fourth, ivory tower architectures promote the overbuilding of software because they typically reflect every feature ever required by any system that your architect(s) were ever involved with, and not just the features that your system actually needs.

Every system has an architecture. BUT, it may not necessarily have architectural models describing that architecture. For example, a small team taking the XP approach and working together in the same room may not find any need to model their system architecture, because everyone on the team knows it well enough that having a model doesn’t provide sufficient value to them. Or, if an architectural model exists, it will often be a few simple plain old whiteboard (POW) sketches, potentially backed by a defined project metaphor.

This works because the communication aspects of XP, including pair programming and Collective Ownership, negate the need for architecture model(s) that would otherwise have to be developed and maintained throughout the project. Other teams – teams not following XP, larger teams, teams where people are not co-located – will find that the greater communication challenges inherent in their environment require them to go beyond word-of-mouth architecture. These teams will choose to create architectural models to provide guidance to developers as to how they should build their software. Fundamentally, the reason why you perform architectural modeling is to address the risk of members of your development team not working to a common vision.

Architecture scales agile. This is true of traditional techniques as well. Having a viable and accepted architecture strategy for a project is absolutely critical to your success, particularly in the complex situations in which agile teams find themselves at scale. Scaling issues include team size, regulatory compliance, distributed teams, technical complexity, and so on (see The Software Development Context Framework (SDCF) for details).

An effective approach to architecture enables you to address these scaling issues.


Wednesday 5 November 2014

Investment Banking in the Cloud

On a recent flight home after our meeting with a large bank, I started reflecting on how the conversations about cloud computing with clients have changed over the last 12 to 24 months. 
In 2012 and 2013, a lot of the conversations were focused on “what is cloud computing,” “help us build a cloud strategy” or “how do we automate our infrastructure.” As we near the end of 2014 these conversations have changed drastically. Most progressive enterprises are knowledgeable about all of the different cloud service models (IaaS, PaaS, and SaaS), have researched the major vendors, have started executing on their cloud strategy, and have become experts at managing the IaaS layer. The focus now appears to be moving up the stack towards the application layer.

2015: The year of cloud applications

Many enterprises have already laid the basic foundations for their clouds, and we’re seeing a mixture of private and public clouds being implemented with a high level of automation at the infrastructure layer. Enterprises have invested a lot of time in implementing guardrails around their clouds so that developers can consume cloud services in a secure and compliant manner. The “build it” part of the “build it and they will come” strategy is complete; now it is time to get the applications and the developers to join the party. I believe that 2015 will be the coming-out party for PaaS. It remains to be seen whether enterprises will buy into pure PaaS platforms, leverage PaaS capabilities via an IaaS provider, or roll their own by leveraging a collection of tools like Docker. I believe the answer is all of the above. In almost every account I go into, the client is either evaluating PaaS or doing a proof of concept with one or more PaaS platforms, and their interest is far greater now than it was at the end of 2013.
DevOps is taking enterprises by storm
At the beginning of 2014, DevOps was not even in the vocabulary of many of our enterprise clients. Around mid-summer we started seeing interest in DevOps, and now it is front and center in almost every conversation. I am not sure what triggered the heightened interest, but DevOps is definitely on the CxO’s wish list right now. DevOps is where cloud was back in 2012. Most of our conversations are “what is DevOps?”, “help us put together a strategy”, “how can we implement continuous integration and continuous delivery?” and “what tools do we need?”. Many clients have already started their DevOps journey but are not seeing the results they expected. I attribute the struggles to the following reasons:
  • IT focused solely on technology and skipped the people and process part
  • IT pushed operations to development without providing proper tools and service design
  • The current SDLC and service management processes create too many bottlenecks
  • Current solutions work for a team of pioneers but do not scale organizationally
In 2015, these organizations will need to re-evaluate their SDLC and operations processes and figure out how to streamline them and remove waste. We worked with one client to automate many of the manual gates in their ITIL processes so that the process still ensured high quality and reliability but did not get in the way of rapidly deploying software. It is time for enterprises to move beyond the technology and reassess their organizational structures and operating models if they want to see the promised land that DevOps strives for: speed to market with quality and reliability.
IT is becoming a cloud service provider
Another common theme I see is that CxOs understand the value proposition of the cloud, but they also realize that if they don’t govern it they will be repeating the sins of the past. If you go into any large organization today you will see chaos everywhere. Most technologies have been implemented with little to no consistency. This creates a lot of waste and makes it extremely difficult to make changes that allow applications to interact with each other. CxOs are taking these lessons learned and are building their own guardrails around third-party cloud solutions to offer their own flavor of cloud to their developers. To say it another way, they are becoming the AWS of their company. These organizations are creating cloud teams who pick cloud solutions, be it AWS, OpenStack, VMware, etc., and wrap them in their own layer of abstraction in order to enforce the cloud principles that are important to them. This is extremely common in health care and financial services institutions.
From what I have seen, enterprises have made a lot of progress with this model from the technology standpoint but are struggling with the operating model. Becoming a service provider is a radical change from running a datacenter. In 2015, enterprises will have to put more focus on the people and process aspects of this transformational change.
Summary
Enterprises have made a lot of progress with their cloud initiatives throughout 2014. I am impressed with how far the industry has come in just the last 12 months. The problem I see is that while enterprises have made great advancements with the technology, they are hitting walls with the people and process part of the equation. 
CxOs who have invested heavily in private and hybrid cloud infrastructures are going to focus on getting more applications deployed to justify the investments of the last two years. 2015 is going to be a make-or-break year for many enterprise cloud initiatives. Enjoy your time off this holiday season, because the really hard battles start in January. Make sure those IT budgets have some big line items for organizational and process transformation.