
Computation v Security

Last month I wrote a piece about computation, data and connections with a view to starting to list out some considerations for each of these with respect to non-functionals... This is part one on computation and security.

In terms of computation, we're talking about the code that does stuff - the code that performs the logical processing on the data and makes use of those connections.

From a security perspective we're primarily concerned with access control. Conceptually this is a question of who is allowed to do what, where, when and how.

  • Who - in essence covering authentication and identification. The who may be human or system. There are many ways to authenticate users and I would strongly advise you use an off-the-shelf component. Most application servers (JEE or .NET) will have built in ways to authenticate users against LDAP or AD etc. These will have been tested for security (penetration testing) and will be more secure than any home-grown solution.

  • What - authorisation to execute the functionality provided (e.g. create-order, send-email, press-the-big-red-button). Again, lots of standardised ways to check authorisation in J2EE and .NET application servers which should be used. Manual/custom code checks such as "isUserInRole()" can be bypassed if someone gets access to the code (a notable issue with client side JavaScript for example).

  • Where - more access-control - where the code is located. Note that wherever this is, we need to question how much you trust that location. Does it require physical security? Is it acceptable to run in the user's browser (e.g. JavaScript)? Should it be in a DMZ or a more tightly controlled security zone? If we assume that it gets compromised*, then what? Does this allow unauthorised access?

  • When - there's a touch point here with availability requirements. A lot of code runs 24x7 but many batch-jobs run on a daily, weekly or monthly schedule. Should these be allowed to run outside of predefined windows and what is the impact if they do?

  • How - this is the code itself, which is protected through language choice, frameworks, access to source-control systems, code-reviews and secure development practices. Scripting languages can be hijacked rather easily; compiled code is harder to subvert since it's unlikely the source-code and compilers are available on the production system (at least they shouldn't be!).
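To make the "who" and "what" bullets concrete, here is a minimal sketch of the kind of role check a container performs for you behind isUserInRole(). Everything here is illustrative - the user names, the "operator" role and the pressTheBigRedButton action are made up, and in a real JEE/.NET application the role mapping would come from LDAP/AD via the container rather than a hand-rolled map:

```java
import java.util.Map;
import java.util.Set;

// Illustrative only: a JEE or .NET container would normally manage this for you,
// with roles sourced from LDAP/AD rather than an in-memory map.
class AccessControl {
    private final Map<String, Set<String>> userRoles;

    AccessControl(Map<String, Set<String>> userRoles) {
        this.userRoles = userRoles;
    }

    // The conceptual check behind isUserInRole(): is this "who" allowed this "what"?
    public boolean isUserInRole(String user, String role) {
        return userRoles.getOrDefault(user, Set.of()).contains(role);
    }

    // A privileged action guarded by the role check.
    public void pressTheBigRedButton(String user) {
        if (!isUserInRole(user, "operator")) {
            throw new SecurityException("User " + user + " is not authorised");
        }
        // ... perform the privileged action
    }
}
```

The point of preferring the container's declarative equivalent (e.g. security annotations or deployment descriptors) over scattering checks like this through your own code is exactly the one made above: custom checks can be missed or bypassed.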


Secure Development


Secure development is a minefield and new vulnerabilities are found all the time. Keep an eye on the OWASP Top Ten for the most common issues.

Things like SQL injection (or XML**, JS or anything-else injection) attacks are normally top of the list. These result from code which simply concatenates strings together, some of which are supplied by the end user, to form a SQL statement which is thrown at a database. Submitting something like "; drop database;" in a request may not be the best thing. Perhaps worse still, an attacker can use these methods to query the database structure and retrieve or modify data they should not be allowed to access.

A common fix for this, in the case of SQL, is to use bind variables and prepared statements (which can also bring performance improvements). This will result in the dodgy data becoming part of the query/insert itself, where it may look a little odd in the database but should do less damage. You should also scan parameters for suspect characters, since this avoids storing such nonsense, but in itself that isn't a great solution as it leaves you at the mercy of your developers and any failings they have (and don't we all).
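A rough sketch of the contrast, using JDBC (the orders table and customer column are made up for illustration): the first method shows the dangerous concatenation, the second uses a ? bind variable in a prepared statement so the input stays data and never becomes SQL.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

class OrderQueries {

    // Vulnerable: user input is concatenated straight into the SQL text,
    // so "x'; DROP TABLE orders; --" becomes part of the statement.
    public static String unsafeQuery(String customerName) {
        return "SELECT * FROM orders WHERE customer = '" + customerName + "'";
    }

    // Safer: the ? placeholder is a bind variable; the driver passes the
    // value separately from the SQL text, so it cannot change the statement.
    public static ResultSet findOrders(Connection conn, String customerName)
            throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT * FROM orders WHERE customer = ?");
        ps.setString(1, customerName);
        return ps.executeQuery();
    }
}
```

The prepared-statement version is also the one that can show the performance benefit mentioned above, since the database can cache the parsed statement and reuse it across calls.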

Other secure development concerns are:

  • Data validation. Check every parameter is of the correct type, expected range and form (e.g. see above re SQL injection). This often means code at both ends of a web-page (i.e. in JavaScript for user-friendly solutions, and in the server code in case the JS has been bypassed). You may choose to trust code within a container, as this should have been checked at compile time, but anything at the two ends of a connection should be checked, and with interpreted code you may want to be paranoid if you don't have good control over the environment.

  • Escaping strings. XSS attacks work by allowing attackers to insert JS and HTML into someone else's browser through your website. The impact can vary from being a minor nuisance to allowing someone to capture personal data or submit requests on the user's behalf without their knowledge. Escaping strings will result in the JS/HTML appearing as simple text on the page. If you do want to allow some HTML then you'll need more elaborate parsing.

  • Avoiding buffer overflows. Many modern languages such as Java and C# help you avoid these through their own memory management, though you need to ensure you keep these frameworks patched and up-to-date. If you code in lower-level languages and do your own memory management then you should consider the impact of memory being accessed in ways you never intended. Validating data and ranges (above) also helps avoid some of these issues.

  • Trapping exceptions and dealing with them effectively; including those that should never happen. I'm a fan of code which simply throws a complete hissy-fit in the event these things happen. Just bomb out in as brutal a way as is acceptable, for simplicity's sake, as it's usually symptomatic of something more serious. Though you need to ensure you log it.

  • Log events and exceptions, record as much state as is reasonable for later analysis (don't log passwords!) and don't expose this data to the end user. A full stack trace in the browser may be useful during development but it just exposes the inner workings of your system to attackers - a useful way for them to identify potential weak spots.

  • Audit. Where you want to prove traceability, record audit logs of who did what, when, and where. When you get attacked this will help trace back to an IP address, a machine or a user who initiated it. There is then the question of where audit logs are kept and how to ensure they can't be subverted; otherwise, if you need to use them in a court of law at some stage, you won't be able to demonstrate that the logs themselves haven't been tampered with (i.e. non-repudiation).
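The data validation point above can be sketched very simply. The rules here are invented for illustration (say, an order id of 1-10 digits and a quantity in the range 1-999); the point is that every parameter gets a positive check against an expected type, form and range rather than a blacklist of bad characters:

```java
import java.util.regex.Pattern;

class InputValidator {
    // Hypothetical rule: an order id is 1 to 10 digits, nothing else.
    private static final Pattern ORDER_ID = Pattern.compile("\\d{1,10}");

    // Whitelist validation: accept only what matches the expected form.
    public static boolean isValidOrderId(String s) {
        return s != null && ORDER_ID.matcher(s).matches();
    }

    // Range validation: hypothetical business limit of 1 to 999 items.
    public static boolean isValidQuantity(int q) {
        return q >= 1 && q <= 999;
    }
}
```

Checks like these belong on the server even if the same rules also run in browser JavaScript, for exactly the reason given above: the JS end of the connection may have been bypassed.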
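And for the escaping point, a minimal sketch of HTML-escaping the five significant characters so injected markup renders as plain text. This is deliberately simplified - in practice you'd reach for a vetted library (the OWASP Java Encoder, for example) rather than rolling your own:

```java
class HtmlEscaper {
    // Escape the characters HTML treats as markup so user-supplied
    // strings render as text rather than executing as JS/HTML.
    public static String escapeHtml(String s) {
        StringBuilder sb = new StringBuilder(s.length());
        for (char c : s.toCharArray()) {
            switch (c) {
                case '&':  sb.append("&amp;");  break;
                case '<':  sb.append("&lt;");   break;
                case '>':  sb.append("&gt;");   break;
                case '"':  sb.append("&quot;"); break;
                case '\'': sb.append("&#39;");  break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }
}
```

With this applied on output, an injected <script> tag appears on the page as literal text instead of running in the victim's browser.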


Buy v Build


There is a very (very very very) strong "buy" argument for security. Most commercial security solutions will be paranoid. They'll have been tested and attacked aggressively to ensure they are secure, and should have some sort of formal security accreditation. The supplier should also be responsive to new attack vectors and aware of those which your developers never thought of; such as filtering out TRACE/TRACK HTTP methods, scanning requests for code injection, limiting data volumes to avoid buffer-overflows etc. If you create your own security solution then your only real security is security through obscurity. With a commercial option you'll get better security, adherence to standards, and another layer of defence between your code and attackers which can act as a safety net in the event of issues elsewhere.

There are lots of options in terms of frameworks, reverse proxies, intrusion detection, bastion servers, firewalls, anti-virus, directory services etc. which all help to complete the shield and provide protection.

Ultimately, security requirements in relation to computation are concerned with access-control, identification, authentication, authorisation, auditing, and the adoption of good secure development practices.

* I'm always amazed at the reaction of many developers to the risk of something being compromised. They often assume it won't happen and come up with all sorts of shallow arguments: "it won't happen, the box is safe under my desk". The point of asking these questions is to step through the argument and consider "what if?". What if a malicious colleague got access to the box? What if the box was stolen from the office (I have seen this happen more than once)? etc. If the rationale is sound and/or the impact of it happening is low then perhaps it's acceptable; if not then something needs to change.

** There was a case I heard of regarding a piece of XSL which would send a transformer into overdrive, creating some XML many, many GBs in size. This was apparently a very small piece of code but the result would be a collapse of the server. It is quite likely some smart cookie out there will work out a way to use whatever language you're using against you at some time.

