In-Memory Analytics – Get Ahead Of The Regulators
Posted: 6 March 2013 | Author: Georges Bory | Source: Quartetfs
It is unsurprising that the Financial Stability Board (FSB) advised the Basel Committee on Banking Supervision to prioritise the management of operational risk on its current agenda – given the turbulence Britain’s banks saw in 2012 and have already seen this year. The RBS IT glitch, Libor fixing and claims of PPI mis-selling were but a few of the operational failures to hit the headlines.
Operational problems can occur across a number of business areas, due to overly complex and poorly managed processes, inadequate IT infrastructure and, needless to say, fraud. But many banks still lack the information needed to monitor all these areas effectively, and traditional risk limits are not up to the task in hand. With most banks running tens of thousands of rules and checks on a daily basis, they must be able to monitor trends and so identify breaches as and when they occur.
For example, a trader whose daily position makes £8,000 profit on average could gradually move up to £20,000 a day over a period of weeks or months. This change illustrates an operational risk – it is beyond the trend. It is likely that risk limits are being breached, protocols are being ignored, or some other factor means that things are no longer ‘business as usual’.
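One simple way to catch this kind of drift is to compare each day’s PnL against a rolling baseline and flag days that sit far outside it. The sketch below is illustrative only – the function name, window length and threshold are assumptions, not anything from a specific product:

```python
# Hypothetical sketch: flag a trader's daily PnL when it moves beyond the
# established trend, using a trailing baseline and a z-score threshold.
# The window and threshold values are illustrative assumptions.
from statistics import mean, stdev

def flag_trend_breaks(daily_pnl, window=20, z_threshold=3.0):
    """Return indices of days whose PnL lies more than z_threshold
    standard deviations from the trailing `window`-day baseline."""
    flagged = []
    for i in range(window, len(daily_pnl)):
        baseline = daily_pnl[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(daily_pnl[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

# A trader averaging ~8,000 a day who suddenly books 20,000 stands out:
history = [8000 + (-1) ** d * 300 for d in range(20)]  # stable baseline
history.append(20000)                                   # the outlier day
print(flag_trend_breaks(history))  # → [20]
```

A rolling window like this adapts to slow, legitimate shifts in a book’s profitability while still flagging moves that break the trend; the window length controls how quickly the baseline forgets.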
Compounding the problem, we are dealing with massive sets of data of different types – the three Vs of ‘Big Data’: variety, velocity and volume. Big Data makes identifying these operational changes even more difficult. Heterogeneous data sources need to be analysed in real time, every day, and presented in a simple, consolidated format that is easily understood by staff across different business areas. Banks need a consolidated view of market risk and profit and loss (PnL) in one place – something organisations are beginning to wake up to in the run-up to Basel III.
In order to make sense of Big Data with regard to operational risk, banks are increasingly turning to real-time aggregation and analytics engines that pull data from multiple disparate silos and calculate sophisticated KPIs ‘on the fly’. The data is processed ‘in memory’ – in the cache or RAM, rather than on disk. Querying data sets in memory dramatically speeds up response times while increasing reliability. When it comes to regulatory reporting, having real-time risk data at the CRO’s fingertips, easily manipulated on the fly, is imperative.
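The core idea – holding records from several feeds in RAM and rolling them up into consolidated KPIs without touching disk – can be sketched in a few lines. The feed names, record fields and KPI choices below are hypothetical, not a description of any particular engine:

```python
# Illustrative sketch of in-memory aggregation: trade records from several
# (hypothetical) source feeds are held in RAM and rolled up on the fly into
# per-desk KPIs, with no round-trip to disk.
from collections import defaultdict

def aggregate_kpis(feeds):
    """Consolidate trade records from multiple feeds into per-desk
    totals of PnL and notional exposure, entirely in memory."""
    kpis = defaultdict(lambda: {"pnl": 0.0, "notional": 0.0})
    for feed in feeds:
        for record in feed:
            desk = kpis[record["desk"]]
            desk["pnl"] += record["pnl"]
            desk["notional"] += record["notional"]
    return dict(kpis)

# Two toy feeds standing in for disparate source systems:
equities_feed = [{"desk": "equities", "pnl": 1200.0, "notional": 5e6}]
rates_feed = [{"desk": "rates", "pnl": -300.0, "notional": 2e6},
              {"desk": "equities", "pnl": 450.0, "notional": 1e6}]
view = aggregate_kpis([equities_feed, rates_feed])
print(view["equities"]["pnl"])  # → 1650.0
```

Because the working set never leaves memory, a query like this is answered at RAM speed; a production engine adds incremental updates and concurrency on top of the same principle.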
The fact is that banks will never be able to protect themselves perfectly against every possible risk. However, there are many preventable aspects of operational risk that banks can and should do more to address. Financial institutions tend to have 80% of the problem solved already – they already calculate PnL and market risk. What they must do now is look at the technology and data available to them in a consolidated manner, to make the most of the information they already have. Managing operational risk is ultimately about getting ahead of the risk trend – something only technology can help with.