Wednesday 22 October 2014

iOS and Xamarin

Deciding between native iOS and Xamarin (MonoTouch)

An old friend of mine, who I worked with back when I was a .NET developer, emailed me recently. He was looking for some advice on which platform to use for creating iOS software, mostly focusing on native versus Xamarin (MonoTouch). My answer does not delve into specifics about Xamarin, because it’s been years since I used their platform in earnest. It does, however, express my opinion on native vs. non-native. He left out other options, such as PhoneGap, perhaps because he simply was not yet aware that they exist.
Below is his email, followed by my reply. I don’t expect this to settle the matter once and for all (where’s the fun in that?!) but perhaps it will give others a new perspective on the topic.
His Question
Since I have started playing around with iOS, two questions that came to my mind and I thought I should ask you.
To be or not to be native iOS?
Xamarin or not to Xamarin?
My real purpose is
  • to create some productivity apps for handheld devices and
  • to create some re-usable control libraries.
It will be great if you can share your expertise on this :-)
My Reply
Your question is a good one, which I’ve discussed with many people. So far my answer is broken down into three considerations:
  1. Reach – Is getting the same codebase onto as many devices as possible a high priority, or are you more interested in keeping the app on one operating system for the first launch (in order to create the best app possible – no compromises)?
  2. User Experience – Are you willing to make sacrifices in the user experience to use a non-native layer (ex. building HTML-based UI via PhoneGap)?
  3. Budget – Can you afford to build a separate app for each platform? If so, that’s probably the best route since you won’t need to rely on lowest common denominator solutions that bring their own bugs and limitations to the table.
In your particular case, the re-usable control libraries should be native because I doubt that such performance-sensitive, low-level rendering code would be reusable via Xamarin’s platform across different OS’s. You can always wrap your native libs with their runtime wrapping API, which I read about a while back, and use them from Xamarin apps if necessary.
Regarding the productivity apps, I don’t know enough details about your situation to say anything definite. Using Xamarin for those apps might make sense, and you will probably have a temporary advantage of avoiding the learning curve associated with native iOS programming (note: I wrote temporary). Truth be told, learning native iOS programming isn’t all that difficult; it just takes time and a little brain rewiring. Also, when working with Xamarin or PhoneGap, you often end up dealing with Objective-C code anyway (plug-ins, StackOverflow examples, etc.).
I have a bias toward native since it is inherently better than any 3rd party layer, and I’m not interested in dealing with the bugs of Apple and another company as well. Unfortunately, technical preferences don’t usually drive business decisions!

Tuesday 21 October 2014

FATCA Compliance

Key Challenges

  • Among customer accounts spread across your institution’s lines of business, geographies and relationship classifications, how do you efficiently identify individual US account holders and their substantial ownership of entities?
  • How do you pull customer data, along with financial transactions, from disparate systems to build a unified view of your US customers as the starting point of FATCA compliance?
  • How do you undertake comprehensive FATCA due diligence for new customers and carry out the requisite review and remediation for existing customers?
  • How do you build FATCA compliance infrastructure that is flexible, adaptable and scalable enough to meet similar future tax compliance reporting requirements of other countries?
These are just some of the questions that our clients run by me when planning for a cloud-based compliance solution.

KYCsphere Solution

The next-generation, Microsoft Azure cloud-based KYCsphere solution for Foreign Account Tax Compliance Act (FATCA) compliance streamlines the identification of new US customers/accounts with the help of its Customer On-Boarding Tool.


Based on US indicia, the tool identifies not only the seemingly apparent US customers/accounts among individual account holders; through a Customer Due Diligence Tool driven process, it also discovers substantial US ownership and hidden beneficial ownership across complex legal entities and corporate structures. Once such customer/account relationships are identified, the requisite documentation for each customer/account classification can be requested and captured within KYCsphere. These data-rich profiles, with documentary evidence, are routed through a roles- and rules-based workflow within KYCsphere, across your financial institution. Further investigation can be performed with the Enhanced Due Diligence Tool for high-risk, FATCA non-compliant customers, including recalcitrant customers.

Pre-existing customers/accounts identified through US indicia must be further filtered to those whose aggregated assets and transactions exceed the FATCA-prescribed thresholds. For such cases, the current profiles require review and FATCA remediation, including attaching documentary evidence, proofs and certifications. These tasks can be performed within KYCsphere through detailed profiling by the operational team and, where required, Enhanced Due Diligence by the compliance team. The tool further supports seeking additional information from relationship managers for high-risk and recalcitrant customers. Year-end reporting of US customers’ data, including that of recalcitrant customers, is done either directly to the IRS or through the regulatory mechanism within the Foreign Financial Institution’s (FFI’s) country, in the required reporting format.

From on-boarding new customers to monitoring existing ones and conducting due diligence, along with reporting and withholding support, the cloud-based KYCsphere toolkit takes care of the complete lifecycle of FATCA compliance. It leverages your institution’s current US customer/account data, and provisions additional FATCA remediation data fields and documents to be captured, in order to feed a secured FATCA repository on the cloud dedicated to your institution. On top of this repository, the KYCsphere application and its underlying FATCA engine perform FATCA compliance tasks across multiple lines of business, multiple jurisdictions and multiple IGAs and treaties, in a cost-effective, pay-as-you-go fashion.

This cloud-based KYCsphere toolkit, with its dedicated FATCA repository, can thus make your institution compliant in a short span of time and offers the regulatory flexibility to comply with similar tax compliance regulations of other countries. Your institution can achieve this without any capital expenditure and with minimal IT intervention in your existing legacy systems.

Key Benefits

  • Build a cost-effective, pay-as-you-go, centralized FATCA compliance repository on the cloud, and keep adapting and scaling it for the constantly changing tax compliance regulations of other countries, without ever incurring capital expenditure, software licensing costs, annual maintenance contracts or upgrade fees.
  • Implement a single FATCA compliance platform across multiple jurisdictions, for countries with or without FATCA Inter-Governmental Agreements (IGAs).
  • Start with Customer On-boarding of new customers, follow with Due Diligence and FATCA remediation of pre-existing ones, and conclude by building FATCA reporting capabilities. Do this across your institution’s lines of business, geographies and relationship classifications, incrementally, in line with the multi-year FATCA compliance deadlines.
  • On a single FATCA application platform, help your compliance team collaborate with the operational team, relationship managers and senior management while seeking additional FATCA remediation data and documentation for pre-existing customers/accounts.


My advice...

Plan your FATCA compliance and consider using the cloud so that you can remain compliant in the future without disrupting your current operations, making changes to your existing systems, or incurring additional maintenance and upgrade costs.


Sunday 19 October 2014

Future of Payments

While working on a recent project for a client in the banking sector, we researched various trends and ideas about the future of banking and payments. That’s what I will try to share with you below.



The driving forces of change


The factors and user needs that are driving change in the banking sector are as follows:

  • The increasing use of, and dependence on, mobile devices (or simply small screens)
  • The need for a ‘democratisation of payments’. This in particular has influenced start-ups to create solutions that make it possible for everyone to accept payments by card despite cost and technology barriers. Sharing bank account details with other people can be tiresome. Who remembers their account number and sort code by heart?
  • Social media is a place for organisations to engage with their customers, but some banks and start-ups are starting to use it in more innovative ways
  • The need for financial and budgeting advice
  • Fees are still high for some transactions, whereas speed of transaction execution isn’t satisfactory for international payments
  • Despite all the above, we should never forget the need for better security

So how have these factors impacted on the world of digital payments and online banking?

Mobile payments


How many times have you been disappointed that your local store doesn’t accept card payments, or adds a charge to payments under a certain amount? Targeting mostly small merchants who can’t offer card payments because of hardware and IT costs, start-ups have turned tablets and smartphones into card processors. All you need is an app and a small card reader that attaches to the phone.



However, is this approach really worth the pain – can business with small merchants be profitable? Rumour has it that Visa have invested in Square, an innovative start-up in this domain. Some big banks, like Santander and Lloyds, are also supporting similar projects.

Who’s already doing it?


  • Square are the pioneers in mobile payment
  • Groupon with Breadcrumb, Worldpay with Zinc and iZettle

Utilising NFC and other technologies, digital wallets are another form of mobile payment.

Payment without bank details


Searching for bank details every time you want to set up or accept an online payment is tiresome. Some debit cards don’t even have sort codes on them! Soon, however, you might only need to know the recipient’s email address or mobile number.

Who’s already doing it?


  • NatWest allow their mobile app users to pay their contacts if they too have a Visa card. They also run prize draws to motivate users to try it out. This service is powered by Visa Personal Payments
  • PayPal offer payments to email addresses or phone numbers. Again, the only limitation is that both parties (payer and recipient) must have a PayPal account
  • Google offer their Gmail users the ability to send money as an email attachment if they are using Google Wallet

Social banking (Face-banking)


Would you follow a bank on Facebook or Twitter? You might if their posts were relevant to your interests. A few banks have succeeded in creating a content strategy for their social networks to engage and grow their following. For example, BBVA and Barclays gained thousands of followers by posting about football. In between football updates, they also tweeted about their products and services.

Following financial institutions on social networks is one thing but how about accessing your account balance through Facebook and even sending money to your friends? Facebook banking apps allow you to utilise the capabilities of web banking without having to leave your beloved(!) profile. Facebook even guaranteed they won’t have access to your financial data…

Who’s already doing it?


  • Australian Commonwealth Bank
  • New Zealander ASB Bank with a virtual branch on FB
  • Nigerian GTBank

Financial planning: ‘spend, save and live smarter’


If you want to be fancy in the financial sector nowadays you have to offer tools that help your customers manage their finances in a smarter way. Again, start-ups are leaders in this domain and offer alternatives which are commonly superior to those of traditional financial institutions. Here’s what personal financial management (PFM) tools offer:


  • Track and compare with past spending/saving behaviour
  • Show where money is spent
  • Option to set up savings goals
  • Advice on what is safe to spend and whether spending some money will affect your financial health, e.g. future planned payments or saving goals
  • Making the experience more pleasant with gamified messages and instructions


Who’s already doing it?

  • Moven
  • Mint
  • OnTrees
  • Money Dashboard

Some of these services might sound a bit extraordinary or risky (like logging in and banking through Facebook…). The truth is that there isn’t accurate data out there to validate the success or failure of these services.

Time will tell.


Saturday 18 October 2014

Is Design Dead?

For many that come briefly into contact with Extreme Programming, it seems that XP calls for the death of software design. Not only is much design activity ridiculed as "Big Up Front Design", but such design techniques as the UML, flexible frameworks, and even patterns are de-emphasized or downright ignored. In fact XP involves a lot of design, but does it in a different way than established software processes. XP has rejuvenated the notion of evolutionary design with practices that allow evolution to become a viable design strategy. It also provides new challenges and skills as designers need to learn how to do a simple design, how to use refactoring to keep a design clean, and how to use patterns in an evolutionary style.

Extreme Programming (XP) challenges many of the common assumptions about software development. Of these one of the most controversial is its rejection of significant effort in up-front design, in favor of a more evolutionary approach. To its detractors this is a return to "code and fix" development - usually derided as hacking. To its fans it is often seen as a rejection of design techniques (such as the UML), principles and patterns. Don't worry about design, if you listen to your code a good design will appear.
I find myself at the center of this argument. Much of my career has involved graphical design languages - the Unified Modeling Language (UML) and its forerunners - and in patterns. Indeed I've written books on both the UML and patterns. Does my embrace of XP mean I recant all of what I've written on these subjects, cleansing my mind of all such counter-revolutionary notions?
Well I'm not going to expect that I can leave you dangling on the hook of dramatic tension. The short answer is no. The long answer is the rest of this paper.

Planned and Evolutionary Design

For this paper I'm going to describe two styles of how design is done in software development. Perhaps the most common is evolutionary design. Essentially evolutionary design means that the design of the system grows as the system is implemented. Design is part of the programming process, and as the program evolves the design changes.
In its common usage, evolutionary design is a disaster. The design ends up being the aggregation of a bunch of ad-hoc tactical decisions, each of which makes the code harder to alter. In many ways you might argue this is no design; certainly it usually leads to a poor design. As Kent puts it, design is there to enable you to keep changing the software easily in the long term. As design deteriorates, so does your ability to make changes effectively. You have the state of software entropy: over time the design gets worse and worse. Not only does this make the software harder to change, it also makes bugs both easier to breed and harder to find and safely kill. This is the "code and fix" nightmare, where the bugs become exponentially more expensive to fix as the project goes on.
Planned Design is a counter to this, and contains a notion born from other branches of engineering. If you want to build a doghouse, you can just get some wood together and get a rough shape. However if you want to build a skyscraper, you can't work that way - it'll just collapse before you even get half way up. So you begin with engineering drawings, done in an engineering office like the one my wife works at in downtown Boston. As she does the design she figures out all the issues, partly by mathematical analysis, but mostly by using building codes. Building codes are rules about how you design structures based on experience of what works (and some underlying math). Once the design is done, then her engineering company can hand the design off to another company that builds it.
Planned design in software should work the same way. Designers think out the big issues in advance. They don't need to write code because they aren't building the software, they are designing it. So they can use a design technique like the UML that gets away from some of the details of programming and allows the designers to work at a more abstract level. Once the design is done they can hand it off to a separate group (or even a separate company) to build. Since the designers are thinking on a larger scale, they can avoid the series of tactical decisions that lead to software entropy. The programmers can follow the direction of the design and, providing they follow the design, have a well-built system.
Now the planned design approach has been around since the 70s, and lots of people have used it. It is better in many ways than code and fix evolutionary design. But it has some faults. The first fault is that it's impossible to think through all the issues that you need to deal with when you are programming. So it's inevitable that when programming you will find things that question the design. However if the designers are done and have moved on to another project, what happens? The programmers start coding around the design and entropy sets in. Even if the designer isn't gone, it takes time to sort out the design issues, change the drawings, and then alter the code. There's usually a quicker fix and time pressure. Hence entropy (again).
Furthermore there's often a cultural problem. Designers are made designers due to skill and experience, but they are so busy working on designs they don't get much time to code any more. However the tools and materials of software development change at a rapid rate. When you no longer code, not only can you miss out on changes that occur with this technological flux, you also lose the respect of those who do code.
This tension between builders and designers happens in building too, but it's more intense in software. It's intense because there is a key difference. In building there is a clearer division in skills between those who design and those who build, but in software that's less the case. Any programmer working in high design environments needs to be very skilled. Skilled enough to question the designer's designs, especially when the designer is less knowledgeable about the day to day realities of the development platform.
Now these issues could be fixed. Maybe we can deal with the human tension. Maybe we can get designers skillful enough to deal with most issues and have a process disciplined enough to change the drawings. There's still another problem: changing requirements. Changing requirements are the number one big issue that causes headaches in software projects that I run into.
One way to deal with changing requirements is to build flexibility into the design so that you can easily change it as the requirements change. However this requires insight into what kind of changes you expect. A design can be planned to deal with areas of volatility, but while that will help for foreseen requirements changes, it won't help (and can hurt) for unforeseen changes. So you have to understand the requirements well enough to separate the volatile areas, and my observation is that this is very hard.
Now some of these requirements problems are due to not understanding requirements clearly enough. So a lot of people focus on requirements engineering processes to get better requirements in the hope that this will prevent the need to change the design later on. But even this direction is one that may not lead to a cure. Many unforeseen requirements changes occur due to changes in the business. Those can't be prevented, however careful your requirements engineering process.
So all this makes planned design sound impossible. Certainly these are big challenges. But I'm not inclined to claim that planned design is worse than evolutionary design as it is most commonly practiced in a "code and fix" manner. Indeed I prefer planned design to "code and fix". However I'm aware of the problems of planned design and am seeking a new direction.

The Enabling Practices of XP

XP is controversial for many reasons, but one of the key red flags in XP is that it advocates evolutionary design rather than planned design. As we know, evolutionary design can't possibly work due to ad hoc design decisions and software entropy.
At the core of understanding this argument is the software change curve. The change curve says that as the project runs, it becomes exponentially more expensive to make changes. The change curve is usually expressed in terms of phases: "a change made in analysis for $1 would cost thousands to fix in production". This is ironic as most projects still work in an ad-hoc process that doesn't have an analysis phase, but the exponentiation is still there. The exponential change curve means that evolutionary design cannot possibly work. It also conveys why planned design must be done carefully, because any mistakes in planned design face the same exponentiation.
The fundamental assumption underlying XP is that it is possible to flatten the change curve enough to make evolutionary design work. This flattening is both enabled by XP and exploited by XP. This is part of the coupling of the XP practices: specifically you can't do those parts of XP that exploit the flattened curve without doing those things that enable the flattening. This is a common source of the controversy over XP. Many people criticize the exploitation without understanding the enabling. Often the criticisms stem from critics' own experience where they didn't do the enabling practices that allow the exploiting practices to work. As a result they got burned and when they see XP they remember the fire.
There are many parts to the enabling practices. At the core are the practices of Testing, and Continuous Integration. Without the safety provided by testing the rest of XP would be impossible. Continuous Integration is necessary to keep the team in sync, so that you can make a change and not be worried about integrating it with other people. Together these practices can have a big effect on the change curve. I was reminded of this again here at ThoughtWorks. Introducing testing and continuous integration had a marked improvement on the development effort. Certainly enough to seriously question the XP assertion that you need all the practices to get a big improvement.
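As a minimal illustration of the safety net this refers to, here is a hedged sketch of a unit test in JUnit 4 style; the Account class is hypothetical and exists only to give the test something to check:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical class, used only to illustrate the idea of a test as a safety net.
    class Account {
        private long balanceInCents = 0;

        void deposit(long cents) {
            balanceInCents += cents;
        }

        long balance() {
            return balanceInCents;
        }
    }

    public class AccountTest {
        @Test
        public void depositIncreasesBalance() {
            Account account = new Account();
            account.deposit(150);
            // If a later change breaks this behaviour, the test fails immediately,
            // which is what makes continuous, confident change possible.
            assertEquals(150, account.balance());
        }
    }

Run on every integration, a suite of tests like this is what lets the team keep changing the code without fear.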
Refactoring has a similar effect. People who refactor their code in the disciplined manner suggested by XP find a significant difference in their effectiveness compared to doing looser, more ad-hoc restructuring. That was certainly my experience once Kent had taught me to refactor properly. After all, only such a strong change would have motivated me to write a whole book about it.
Jim Highsmith, in his excellent summary of XP, uses the analogy of a set of scales. In one tray is planned design; in the other is refactoring. In more traditional approaches planned design dominates because the assumption is that you can't change your mind later. As the cost of change lowers, you can do more of your design later as refactoring. Planned design does not go away completely, but there is now a balance of two design approaches to work with. For me it feels like before refactoring I was doing all my design one-handed.
These enabling practices of continuous integration, testing, and refactoring, provide a new environment that makes evolutionary design plausible. However one thing we haven't yet figured out is where the balance point is. I'm sure that, despite the outside impression, XP isn't just test, code, and refactor. There is room for designing before coding. Some of this is before there is any coding, most of it occurs in the iterations before coding for a particular task. But there is a new balance between up-front design and refactoring.

The Value of Simplicity

Two of the greatest rallying cries in XP are the slogans "Do the Simplest Thing that Could Possibly Work" and "You Aren't Going to Need It" (known as YAGNI). Both are manifestations of the XP practice of Simple Design.
The way YAGNI is usually described, it says that you shouldn't add any code today which will only be used by a feature that is needed tomorrow. On the face of it this sounds simple. The issue comes with such things as frameworks, reusable components, and flexible design. Such things are complicated to build. You pay an extra up-front cost to build them, in the expectation that you will gain back that cost later. This idea of building flexibility up-front is seen as a key part of effective software design.
However XP's advice is that you not build flexible components and frameworks for the first case that needs that functionality. Let these structures grow as they are needed. If I want a Money class today that handles addition but not multiplication then I build only addition into the Money class. Even if I'm sure I'll need multiplication in the next iteration, and understand how to do it easily, and think it'll be really quick to do, I'll still leave it till that next iteration.
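As a rough sketch of what that discipline looks like in code (this Money class is illustrative only, not code from any particular project):

    // Illustrative only: a Money class that supports exactly what today's story needs.
    public final class Money {
        private final long amountInCents;
        private final String currency;

        public Money(long amountInCents, String currency) {
            this.amountInCents = amountInCents;
            this.currency = currency;
        }

        // Addition is needed for the current story, so it goes in now.
        public Money add(Money other) {
            if (!currency.equals(other.currency)) {
                throw new IllegalArgumentException("currency mismatch");
            }
            return new Money(amountInCents + other.amountInCents, currency);
        }

        // Deliberately no multiply(): even if the next iteration is sure to need it,
        // YAGNI says to leave it until that story is actually being played.

        public long amountInCents() {
            return amountInCents;
        }

        public String currency() {
            return currency;
        }
    }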
One reason for this is economic. If I have to do any work that's only used for a feature that's needed tomorrow, that means I lose effort from features that need to be done for this iteration. The release plan says what needs to be worked on now; working on other things in the future is contrary to the developers' agreement with the customer. There is a risk that this iteration's stories might not get done. Even if this iteration's stories are not at risk, it's up to the customer to decide what extra work should be done - and that might still not involve multiplication.
This economic disincentive is compounded by the chance that we may not get it right. However certain we may be about how this function works, we can still get it wrong - especially since we don't have detailed requirements yet. Working on the wrong solution early is even more wasteful than working on the right solution early. And the XPerts generally believe that we are much more likely to be wrong than right (and I agree with that sentiment.)
The second reason for simple design is that a complex design is more difficult to understand than a simple design. Therefore any modification of the system is made harder by added complexity. This adds a cost during the period between when the more complicated design was added and when it was needed.
Now this advice strikes a lot of people as nonsense, and they are right to think that. Right providing that you imagine the usual development world where the enabling practices of XP aren't in place. However when the balance between planned and evolutionary design alters, then YAGNI becomes good practice (and only then).
So to summarize. You don't want to spend effort adding new capability that won't be needed until a future iteration. And even if the cost is zero, you still don't want to add it because it increases the cost of modification even if it costs nothing to put in. However you can only sensibly behave this way when you are using XP, or a similar technique that lowers the cost of change.

What on Earth is Simplicity Anyway?

So we want our code to be as simple as possible. That doesn't sound like that's too hard to argue for, after all who wants to be complicated? But of course this begs the question "what is simple?"
In XPE (Extreme Programming Explained) Kent gives four criteria for a simple system. In order (most important first):
  • Runs all the Tests
  • Reveals all the intention
  • No duplication
  • Fewest number of classes or methods
Running all the tests is a pretty simple criterion. No duplication is also pretty straightforward, although a lot of developers need guidance on how to achieve it. The tricky one has to do with revealing the intention. What exactly does that mean?
The basic value here is clarity of code. XP places a high value on code that is easily read. In XP "clever code" is a term of abuse. But one person's intention-revealing code is another's cleverness.
In his XP 2000 paper, Josh Kerievsky points out a good example of this. He looks at possibly the most public XP code of all - JUnit. JUnit uses decorators to add optional functionality to test cases, such things as concurrency synchronization and batch set up code. By separating out this code into decorators it allows the general code to be clearer than it otherwise would be.
But you have to ask yourself if the resulting code is really simple. For me it is, but then I'm familiar with the Decorator pattern. But for many that aren't it's quite complicated. Similarly JUnit uses pluggable methods which I've noticed most people initially find anything but clear. So might we conclude that JUnit's design is simpler for experienced designers but more complicated for less experienced people?
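To make the decorator point concrete, here is a schematic sketch of the idea; it is deliberately simplified and is not JUnit's actual class hierarchy:

    // Simplified sketch of the Decorator pattern as applied to tests.
    interface Test {
        void run();
    }

    class SimpleTest implements Test {
        public void run() {
            System.out.println("running the actual test case");
        }
    }

    // A decorator wraps another Test and layers optional behaviour around it,
    // leaving the wrapped test itself untouched.
    class RepeatedTest implements Test {
        private final Test wrapped;
        private final int times;

        RepeatedTest(Test wrapped, int times) {
            this.wrapped = wrapped;
            this.times = times;
        }

        public void run() {
            for (int i = 0; i < times; i++) {
                wrapped.run();
            }
        }
    }

    class DecoratorDemo {
        public static void main(String[] args) {
            Test test = new RepeatedTest(new SimpleTest(), 3);
            test.run(); // runs the simple test three times
        }
    }

Whether that reads as simple depends, as noted above, on whether the reader already knows the pattern.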
I think that the focus on eliminating duplication, both with XP's "Once and Only Once" and the Pragmatic Programmer's DRY (Don't Repeat Yourself) is one of those obvious and wonderfully powerful pieces of good advice. Just following that alone can take you a long way. But it isn't everything, and simplicity is still a complicated thing to find.
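A small, hypothetical before-and-after shows the kind of change Once and Only Once asks for:

    // Hypothetical "before": the free-shipping rule appears twice,
    // so a change to the threshold must be made in two places.
    class InvoiceBefore {
        double totalWithShipping(double subtotal) {
            return subtotal > 100 ? subtotal : subtotal + 5.0;
        }

        String shippingLabel(double subtotal) {
            return subtotal > 100 ? "Free shipping" : "Shipping: 5.00";
        }
    }

    // "After": the rule is stated once and only once.
    class InvoiceAfter {
        private static final double FREE_SHIPPING_THRESHOLD = 100;
        private static final double SHIPPING_FEE = 5.0;

        private boolean qualifiesForFreeShipping(double subtotal) {
            return subtotal > FREE_SHIPPING_THRESHOLD;
        }

        double totalWithShipping(double subtotal) {
            return qualifiesForFreeShipping(subtotal) ? subtotal : subtotal + SHIPPING_FEE;
        }

        String shippingLabel(double subtotal) {
            return qualifiesForFreeShipping(subtotal) ? "Free shipping" : "Shipping: 5.00";
        }
    }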
Recently I was involved in doing something that may well be over-designed. It got refactored and some of the flexibility was removed. But as one of the developers said "it's easier to refactor over-design than it is to refactor no design." It's best to be a little simpler than you need to be, but it isn't a disaster to be a little more complex.
The best advice I heard on all this came from Uncle Bob (Robert Martin). His advice was not to get too hung up about what the simplest design is. After all you can, should, and will refactor it later. In the end the willingness to refactor is much more important than knowing what the simplest thing is right away.

Does Refactoring Violate YAGNI?

This topic came up on the XP mailing list recently, and it's worth bringing out as we look at the role of design in XP.
Basically the question starts with the point that refactoring takes time but does not add function. Since the point of YAGNI is that you are supposed to design for the present not for the future, is this a violation?
The point of YAGNI is that you don't add complexity that isn't needed for the current stories. This is part of the practice of simple design. Refactoring is needed to keep the design as simple as you can, so you should refactor whenever you realize you can make things simpler.
Simple design both exploits XP practices and is also an enabling practice. Only if you have testing, continuous integration, and refactoring can you practice simple design effectively. But at the same time keeping the design simple is essential to keeping the change curve flat. Any unneeded complexity makes a system harder to change in all directions except the one you anticipate with the complex flexibility you put in. However people aren't good at anticipating, so it's best to strive for simplicity. However people won't get the simplest thing first time, so you need to refactor in order to get closer to the goal.

Patterns and XP

The JUnit example leads me inevitably into bringing up patterns. The relationship between patterns and XP is interesting, and it's a common question. Joshua Kerievsky argues that patterns are under-emphasized in XP and he makes the argument eloquently, so I don't want to repeat that. But it's worth bearing in mind that for many people patterns seem in conflict with XP.
The essence of this argument is that patterns are often over-used. The world is full of the legendary programmer, fresh off his first reading of GOF, who includes sixteen patterns in 32 lines of code. I remember one evening, fueled by a very nice single malt, running through with Kent a paper to be called "Not Design Patterns: 23 cheap tricks". We were thinking of such things as using an if statement rather than a strategy. The joke had a point: patterns are often overused, but that doesn't make them a bad idea. The question is how you use them.
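To make the joke concrete, here is a hypothetical discount rule written both ways; with only two cases the if statement is the simpler design, and the Strategy only earns its keep once the number of pricing policies genuinely grows:

    // The "cheap trick": a plain if statement.
    class SimpleDiscount {
        double priceAfterDiscount(double price, boolean loyalCustomer) {
            if (loyalCustomer) {
                return price * 0.9;
            }
            return price;
        }
    }

    // The same rule as a Strategy: more moving parts for the same behaviour.
    interface DiscountStrategy {
        double apply(double price);
    }

    class NoDiscount implements DiscountStrategy {
        public double apply(double price) {
            return price;
        }
    }

    class LoyaltyDiscount implements DiscountStrategy {
        public double apply(double price) {
            return price * 0.9;
        }
    }

    class StrategyDiscount {
        private final DiscountStrategy strategy;

        StrategyDiscount(DiscountStrategy strategy) {
            this.strategy = strategy;
        }

        double priceAfterDiscount(double price) {
            return strategy.apply(price);
        }
    }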
One theory of this is that the forces of simple design will lead you into the patterns. Many refactorings do this explicitly, but even without them, by following the rules of simple design you will come up with the patterns even if you don't know them already. This may be true, but is it really the best way of doing it? Surely it's better if you know roughly where you're going and have a book that can help you through the issues instead of having to invent it all yourself. I certainly still reach for GOF whenever I feel a pattern coming on. For me effective design argues that we need to know that the price of a pattern is worth paying - that's its own skill. Similarly, as Joshua suggests, we need to be more familiar with how to ease into a pattern gradually. In this regard XP treats the way we use patterns differently to the way some people use them, but certainly doesn't remove their value.
But reading some of the mailing lists I get the distinct sense that many people see XP as discouraging patterns, despite the irony that most of the proponents of XP were leaders of the patterns movement too. Is this because they have seen beyond patterns, or because patterns are so embedded in their thinking that they no longer realize it? I don't know the answers for others, but for me patterns are still vitally important. XP may be a process for development, but patterns are a backbone of design knowledge, knowledge that is valuable whatever your process may be. Different processes may use patterns in different ways. XP emphasizes both not using a pattern until it's needed and evolving your way into a pattern via a simple implementation. But patterns are still a key piece of knowledge to acquire.
My advice to XPers using patterns would be:
  • Invest time in learning about patterns
  • Concentrate on when to apply the pattern (not too early)
  • Concentrate on how to implement the pattern in its simplest form first, then add complexity later.
  • If you put a pattern in, and later realize that it isn't pulling its weight - don't be afraid to take it out again.
I think XP should emphasize learning about patterns more. I'm not sure how I would fit that into XP's practices, but I'm sure Kent can come up with a way.

Growing an Architecture

What do we mean by a software architecture? To me the term architecture conveys a notion of the core elements of the system, the pieces that are difficult to change. A foundation on which the rest must be built.
What role does an architecture play when you are using evolutionary design? Again XP's critics state that XP ignores architecture, that XP's route is to go to code fast and trust that refactoring will solve all design issues. Interestingly they are right, and that may well be a weakness. Certainly the most aggressive XPers - Kent Beck, Ron Jeffries, and Bob Martin - are putting more and more energy into avoiding any up front architectural design. Don't put in a database until you really know you'll need it. Work with files first and refactor the database in during a later iteration.
Essentially I think many of these areas are patterns that we've learned over the years. As your knowledge of patterns grows, you should have a reasonable first take at how to use them. However the key difference is that these early architectural decisions aren't expected to be set in stone, or rather the team knows that they may err in their early decisions, and should have the courage to fix them. Others have told the story of one project that, close to deployment, decided it didn't need EJB anymore and removed it from their system. It was a sizeable refactoring, it was done late, but the enabling practices made it not just possible, but worthwhile.
How would this have worked the other way round? If you decided not to use EJB, would it be harder to add it later? Should you thus never start with EJB until you have tried things without it and found it lacking? That's a question that involves many factors. Certainly working without a complex component increases simplicity and makes things go faster. However sometimes it's easier to rip out something like that than it is to put it in.

...................................................

So my advice is to begin by assessing what the likely architecture is. If you see a large amount of data with multiple users, go ahead and use a database from day 1. If you see complex business logic, put in a domain model. However in deference to the gods of YAGNI, when in doubt err on the side of simplicity. Also be ready to simplify your architecture as soon as you see that part of the architecture isn't adding anything.

Thursday 16 October 2014

Sophos aims for unified cloud security nirvana with Mojave acquisition

With the purchase of Mojave Networks, Sophos seeks to combine cloud security, endpoint security and advanced filtering to deliver hybrid protection for real-time scenarios. 

With massive breaches affecting everything from retail establishments to Hollywood stars, one has to wonder if there is a better way to protect data in transit and at rest, and if anyone has discovered a process to make that become a reality.
Sophos, known for its desktop security products and cloud-based security services, is aiming to build a more secure cloud by acquiring Mojave Networks, a San Mateo, California-based startup that came to market with cloud-based security solutions.
Mojave fills an important hole in the Sophos product lineup, which only just recently moved into cloud-based security. With the acquisition, Sophos aims to integrate Mojave's primary services into a unified cloud security platform -- those services include cloud-based network security, cloud-based app security and Mobile Device Management (MDM).
The combination of Mojave's offerings with Sophos's cloud, mobile cloud protection systems and its network/end-user/server protection products (appliances, virtual appliances and software) should help Sophos to deliver cloud-based security that is always up to date and can deal with the latest unified threats.
Other companies looking to play in the unified, cloud-based security space include Cisco, Symantec, Dell, and numerous antivirus vendors. However, IT pros have long had to turn to cloud services vendors, along with firewall vendors and antimalware vendors, to cobble together something akin to a complete security solution. If Sophos can pull off the integration of Mojave into its cloud security offerings, the company may be able to offer the unified security nirvana that so many are seeking.
The advantages offered by security services unification cannot be overstated. First and foremost is the idea of a unified security dashboard, which eases deploying security across multiple platforms, devices and connections. What's more, better reporting naturally follows from a unified management system, where all the bits and pieces of security are aware of each other and can offer a better look at how things are secured.
Nevertheless, what security vendors claim and what the real world demands do not always jibe, which raises the question: what should unified security offer, and why?
  • Antimalware: One of the first elements to look for in a security package is how it deals with malware. Better products include everything from link scanners to antivirus tools to real-time (cloud-based) updates.
  • Antiphishing: One of today's biggest security problems is phishing, where embedded links in emails can be used to launch malicious websites that gather information or install spyware on systems. Beyond educating end-users not to open suspicious emails, it is critical to have a service (or software) that detects phishing attempts and puts a stop to them.
  • Content filtering: One of the best ways to limit a user from visiting a malicious site is by leveraging content filtering, where websites are blocked based upon ratings/content and so forth. If a user cannot access a malicious site, security is vastly improved.
  • MDM: For organizations that place workers in the field, it is critical to have control of the devices they use remotely or while traveling. A good MDM system will enforce passwords, keep data encrypted and provide a way to either wipe a lost device or help to locate it.
  • SQL injection protection: Many breaches come from injection attacks, where malicious SQL is slipped into a database query through user input, forcing the database to return results that may reveal private information. A device or cloud service should be in place to prevent that from happening (a parameterized-query sketch follows this list).
  • Advanced Persistent Threat (APT) protection: APTs are one of the latest maladies to impact network security. Those engineered attacks may knit together many smaller attacks on what may seem to be unrelated systems to sneak malware past traditional security products. Unified security can effectively combat APTs by putting the pieces back together, validating or blocking the code.
  • Antispam: Spam can be a major security problem for most any email user. Preventing spam from entering the network proves to be a key capability to protect end users and their resources, and it is best done before the email enters the network.
  • Firewall: Multiple firewalls can provide layers of protection. A unified offering can tie together a next-generation firewall at the edge of the network with a locally installed desktop firewall to plug any potential holes. However, local firewalls need to be managed to be effective, and that is where a unified security package comes into play.
  • Intruder detection and prevention: Keeping unauthorized users out proves to be one of the better ways to prevent data loss and compromises. An effective security system is able to work hand in hand with security directories, firewalls and VPNs to make sure the user is actually the intended user. This works better when managed under a unified system, which could also leverage two-factor authentication and enterprise level LDAP/ADS type directories.
  • Wi-Fi security: Hotspot connectivity is often overlooked. Whether or not that hotspot is internal or located in a coffee shop isn't the real issue -- the real issue is how the traffic travels via the hotspot. Encryption combined with SSL or VPN services becomes a must-have to protect data in the ether; a unified security package should provide the software to secure Wi-Fi traffic and detect when traffic is traveling in the clear (unprotected).
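On the SQL injection point above, the standard application-side defence is a parameterized query. Here is a minimal, hypothetical JDBC sketch; the table name, columns and connection details are made up for illustration:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class CustomerLookup {
        // Hypothetical connection details, for illustration only.
        private static final String DB_URL = "jdbc:postgresql://localhost/crm";

        public static String findEmail(String customerName) throws SQLException {
            try (Connection conn = DriverManager.getConnection(DB_URL, "app_user", "app_password");
                 // The ? placeholder keeps user input out of the SQL text itself, so an
                 // input like "'; DROP TABLE customers; --" is treated as data, not code.
                 PreparedStatement stmt = conn.prepareStatement(
                         "SELECT email FROM customers WHERE name = ?")) {
                stmt.setString(1, customerName);
                try (ResultSet rs = stmt.executeQuery()) {
                    return rs.next() ? rs.getString("email") : null;
                }
            }
        }
    }

Services and appliances that inspect traffic add a further layer, but parameterization at the application is the cheapest place to stop injection.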
Only by combining the above into a centrally managed offering can one hope to achieve true unified security. After all, security is made up of many moving pieces, and without management some of those pieces are bound to fail.
Hopefully, by combining what were once separate security offerings into a unified platform, Sophos can lead the way for competitors to address those same threats and bring forth competing offerings that can only improve security.

Wednesday 15 October 2014

HIPAA Compliance Company Policies

Handle PHI? Adopt these 25 free policies and be one step closer to HIPAA compliance.

HIPAA compliance is complicated, but it doesn't have to be. In an effort to make compliance as easy as possible for companies working with protected health information (PHI), we decided to open source our HIPAA policies.

These policies have been written with modern, cloud-based technology vendors in mind. I looked far and wide for policy examples that fit our client, and couldn't find any. So we wrote our own. Importantly, these policies have been through three external audits—two HIPAA audits and one HITRUST audit.


For the Modern Cloud Company

Because we crafted these policies for ourselves, we had the profile of a modern cloud healthcare company in mind. They are tailored specifically for you, including our business associate agreement (BAA).

Audited 3 Times Over

These policies have gone through two HIPAA audits and one HITRUST audit. They have been validated by independent third parties.