JP Morgan is the king of derivatives. It created the first derivative. It started the business. It is consistently at the top of the list for exposure. And it has benefitted enormously from the crisis that derivatives have caused.
So how did that benefit arise?
JP Morgan was one of a handful of banks intimately linked to Bilderberg, Goldman Sachs being another, which during the recent financial crisis rose like a phoenix from the ashes while others burned to a cinder. Their market share grew as they stole customers from their failing competitors.
But they also received billions and billions and billions in bailouts while others in equally dire straits did not. The bailouts were administered selectively so that the Bilderberg banks survived and grew stronger while their competitors were allowed to go under. This occurred because Bilderberg had carefully placed its men in key positions: Bernanke at The (Very Well) Fed, and Geithner at the Treasury.
And it was this kind of corruption that arranged for JP Morgan to buy up Washington Mutual and Bear Stearns at near bargain-basement prices, following some well-placed rumours of insolvency.
But how does JP Morgan value its vast multitude of derivatives?
I am aware of at least one attempt by JP Morgan to use the high performance computing (HPC) facilities at Edinburgh University for derivative valuation, when those HPC facilities are supposed to be used for scientific research by the UK scientific community. How this was allowed to occur I don't know.
But the following article reports that JP Morgan has taken a great interest in HPC. The aforementioned experiment at Edinburgh used a cluster of CPUs, and JP Morgan subsequently bought a few such clusters. They then looked at Graphics Processing Units (GPUs), which I am using for my research in CFD, and found them to be 14-15x faster than the CPU clusters. But JP Morgan have now invested heavily in Field Programmable Gate Arrays (FPGAs), finding that they can provide a speedup of more than 130x over the CPU clusters on their main risk model, and over 260x on their Monte Carlo model. They have subsequently bought a 20% stake in the company that implemented the FPGA solution, obviously to control who gets this technology.
It remains to be seen whether the underlying mathematical theory of derivatives is at fault for causing such a financial crisis, or whether, as JP Morgan seem to believe, the fault lay in the speed at which the variables could be calculated (hence the investment in FPGAs), or whether there is just some plain old simple corruption and swindling going on so that JP Morgan and their Bilderberg brethren can grow stronger. Maybe it is all three. But for someone like me who is interested in the Null World Order, and who also has an interest in how HPC can accelerate research and applications, this report is very interesting indeed.
===================================
From http://www.hpcwire.com/hpcwire/2011-07-13/jp_morgan_buys_into_fpga_supercomputing.html
July 13, 2011
JP Morgan Buys Into FPGA Supercomputing
Michael Feldman
One of the largest financial institutions in the world is using FPGA-based supercomputing for analyzing some of its largest and most complex credit derivative portfolios. JP Morgan, along with Maxeler Technologies, has built and deployed a state-of-the art HPC system capable of number-crunching the company's collateralized debt obligation (CDO) portfolio in near real-time.
CDOs are instruments in which the credit assets are divided into different bundles or tranches, according to their relative risk of default. During the credit crisis of 2007-2008, CDO valuation tanked as the value of the underlying assets, mostly mortgages, fell off a cliff. Part of the problem was that many of the computer models didn't assess the risk parameters of the various mortgages correctly. The less obvious aspect was that these instruments were so complex that it was difficult for the models using traditional computer technology to analyze these portfolios effectively.
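To make the tranche mechanics concrete, here is a minimal sketch (purely illustrative, not any bank's actual model) of how a single tranche absorbs pool losses between its attachment and detachment points; the parameters are invented. The point is only that a tranche looks safe until pool losses reach its attachment point, after which its value erodes very quickly.

    // Purely illustrative, not any bank's model: a tranche absorbs the slice of
    // pool losses that falls between its attachment and detachment points.
    #include <algorithm>
    #include <cstdio>
    #include <initializer_list>

    // Loss taken by the tranche, as a fraction of the tranche's own notional.
    double trancheLoss(double poolLoss, double attach, double detach) {
        double absorbed = std::min(std::max(poolLoss - attach, 0.0), detach - attach);
        return absorbed / (detach - attach);
    }

    int main() {
        // Hypothetical mezzanine tranche: 3% attachment, 7% detachment.
        for (double poolLoss : {0.02, 0.05, 0.10})
            std::printf("pool loss %4.0f%% -> tranche loss %4.0f%%\n",
                        poolLoss * 100.0, trancheLoss(poolLoss, 0.03, 0.07) * 100.0);
        return 0;
    }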
With the credit crisis in full swing in 2008, Stephen Weston joined JP Morgan's London office, heading up a team devoted to making the company's financial algorithms and models run more effectively. In what started out as a blue-sky technology project almost three years ago, Weston's group has implemented a production-ready solution that speeds up the company's CDO risk models by a factor of more than 130. "This, to us, is a step change," said Weston, talking about the project during a presentation at Stanford University in May.
Execution time was the critical factor. Prior to the FPGA solution, JP Morgan's main risk model for analyzing their CDO portfolio took 8 to 12 hours to complete -- an overnight run requiring a cluster of thousands of x86 cores. If the model failed to execute correctly, there was no time to resubmit the application for that day. Worse yet, the credit risks and valuation are in constant flux. That snapshot of the previous day may no longer be useful. "It was a bit like driving your car on the freeway at 90 miles per hour by looking in the rear view mirror," said Weston. "It could be fun, but there's a high probability it could be a destructive activity."
With the speedup, the same risk model took four minutes, with the FPGA processing eating up just 12 seconds of that. It's not just that they could run the models faster though. The better performance allowed them to run multiple trading/risk scenarios throughout the day. So traders can evaluate more scenarios using different combinations of default criteria. In a nutshell, the time compression allowed JP Morgan to get a better handle on the risk profile of their CDO assets.
In general, porting legacy applications like these financial risk models to FPGAs is no small task. Programming them with low-level VHDL, the traditional programming language of FPGAs, is time-consuming, tedious, and generally unsuited for application developers. Weston knew that it would be a tough sell to convince the quants and management types at the company that this could be a viable solution for a production environment.
In fact, initially JP Morgan looked at GPUs for acceleration. They ported one of their models to the graphics architecture and were able to get a 14- to 15-fold performance boost. But they thought they could do even better with FPGAs. The problem was that it was going to take about 6 months for an initial port. That's when they went to Maxeler and initiated a proof-of-concept engagement with them.
Maxeler is a London-based technology vendor specializing in FPGA acceleration for high performance computing applications. Unlike most FPGA vendors though, Maxeler offers a vertically integrated solution: hardware, high-level compilers (Java), runtime support, development tools, and FPGA porting expertise. As such, the company is able to meet application programmers on their own turf and help them navigate the eccentricities of FPGA software development. At least, that's Maxeler's pitch.
With JP Morgan, it all seemed to work. With Maxeler's help, Weston's group was able to port the time-critical, compute-intensive pieces of their C++ risk model (the Copula and Convoluter kernels, in particular) to the FPGA platform in about 3 months. The end result was something Weston felt was sustainable for their production environment.
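The article does not describe what the Copula kernel actually computes, but in the standard one-factor Gaussian copula approach to CDO pricing the core loop evaluates, for every market-factor node and every name in the pool, the probability of default conditional on that factor. The sketch below assumes that structure; the function names and parameters are mine, not JP Morgan's or Maxeler's. It is exactly this kind of regular, nested arithmetic loop that maps well onto a deep FPGA pipeline.

    // A sketch of what a one-factor Gaussian copula kernel typically computes
    // (an assumption, not JP Morgan's actual code): conditional default
    // probabilities for each name at each market-factor node.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Standard normal cumulative distribution function.
    double normCdf(double x) { return 0.5 * std::erfc(-x / std::sqrt(2.0)); }

    // condPd[m][i] = P(name i defaults | market factor = factors[m]).
    // threshold[i] is the precomputed default threshold Phi^-1(p_i),
    // rho is a flat asset correlation.
    std::vector<std::vector<double>> copulaKernel(const std::vector<double>& factors,
                                                  const std::vector<double>& threshold,
                                                  double rho) {
        std::vector<std::vector<double>> condPd(factors.size(),
                                                std::vector<double>(threshold.size()));
        double sr = std::sqrt(rho), s1r = std::sqrt(1.0 - rho);
        for (std::size_t m = 0; m < factors.size(); ++m)
            for (std::size_t i = 0; i < threshold.size(); ++i)  // regular, independent work
                condPd[m][i] = normCdf((threshold[i] - sr * factors[m]) / s1r);
        return condPd;
    }

    int main() {
        // Tiny hypothetical example: three market-factor nodes, two names.
        auto pd = copulaKernel({-2.0, 0.0, 2.0}, {-1.5, -2.0}, 0.3);
        std::printf("P(default | bad state) for name 0: %.3f\n", pd[0][0]);
        return 0;
    }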
Part of the effort to port the risk model involved redesigning the original C++ code, which was chock full of templates and objects. Those language structures are great for application abstraction, said Weston, but they effectively kill parallelism, and thus performance. So the first phase of the code migration was to remove all uses of classes, templates, and other C++ abstractions that got in the way of parallelization.
With the lower level code exposed, it became much simpler to tease out the parallelism that could be exploited by the FPGAs. In this case, the flattened C++ source was ported to Java, which the Maxeler compiler is able to convert to VHDL.
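A contrived before-and-after illustration of the kind of restructuring described above (not the bank's code): the abstraction-heavy version hides the arithmetic behind an interface, while the flattened version exposes a plain loop over arrays that a dataflow compiler can turn into a hardware pipeline.

    // Contrived illustration of the restructuring described above, not the bank's code.
    #include <cstddef>
    #include <cstdio>

    // Before: abstraction-heavy. Virtual dispatch and per-object state make it
    // hard for a compiler (or a dataflow tool chain) to see the parallelism.
    struct Instrument {
        virtual double conditionalLoss(double marketFactor) const = 0;
        virtual ~Instrument() = default;
    };

    // After: flattened data layout and straight-line arithmetic in the hot loop,
    // so every iteration is independent and pipelines cleanly.
    void conditionalLossFlat(const double* notional, const double* condPd,
                             double lossGivenDefault, std::size_t n, double* out) {
        for (std::size_t i = 0; i < n; ++i)
            out[i] = notional[i] * condPd[i] * lossGivenDefault;
    }

    int main() {
        double notional[3] = {1e6, 2e6, 5e5};   // made-up portfolio of three names
        double condPd[3]   = {0.02, 0.05, 0.01};
        double loss[3];
        conditionalLossFlat(notional, condPd, 0.6, 3, loss);
        std::printf("expected conditional loss of name 0: %.0f\n", loss[0]);
        return 0;
    }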
Hardware-wise, the final target system is a 40-node hybrid HPC cluster from Maxeler. Each node houses eight Xeon cores hooked up to two Xilinx Virtex-5 (SX240T) FPGAs via PCIe links. Memory is split between the CPU (24GB) and the two FPGAs (12 GB each). Two terabytes of hard disk storage are hung off an Ethernet connection.
The advantage of the FPGA is that it is built for parallelism and allows the application to be intimately mapped onto the hardware. The devices are especially suited to applications that can exploit fine-grained parallelism and very deep pipelines. Unlike linear computations on fast CPUs (~2.6 GHz), parallel computation on slower FPGAs (~200 MHz) can yield many more calculations per watt. As Weston put it, "We went from computing in time to computing in space."
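"Computing in space" can be loosely pictured in software: the stages of a calculation sit side by side as hardware, and once the pipeline has filled, a new result drops out every cycle even though each individual stage is slow. The toy model below only mimics that idea in ordinary C++; it is my illustration and says nothing about how the Maxeler tool chain is actually programmed.

    // Toy model of a 4-stage hardware pipeline: the stages are laid out "in
    // space" and all advance once per cycle, so after a latency of 4 cycles a
    // new result emerges every cycle. Illustration only.
    #include <array>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    int main() {
        std::array<double, 4> stage{};                 // registers between pipeline stages
        std::vector<double> inputs = {1, 2, 3, 4, 5, 6, 7, 8};

        for (std::size_t cycle = 0; cycle < inputs.size() + stage.size(); ++cycle) {
            // Results leave the end of the pipeline once it has filled (latency 4).
            if (cycle >= stage.size())
                std::printf("cycle %zu: result %.2f\n", cycle, stage[3]);
            // Advance every stage by one step, back to front, all in the same "cycle".
            stage[3] = stage[2] * 0.5;                 // stage 4: scale
            stage[2] = stage[1] + 1.0;                 // stage 3: offset
            stage[1] = stage[0] * stage[0];            // stage 2: square
            stage[0] = cycle < inputs.size() ? inputs[cycle] : 0.0;  // stage 1: ingest
        }
        return 0;
    }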
Right now the company is in the final stages of the project to integrate it with the rest of their production infrastructure. They are also looking to move the technology into other areas of their business like FX trading and high frequency trading, and in some cases are seeing even better performance improvements. Their Monte Carlo model, for example, was able to realize a 260- to 280-fold speedup using FPGA acceleration.
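The article does not say what the Monte Carlo model prices, so the following is only a generic example of why Monte Carlo workloads accelerate so well: every simulated path is independent of every other, so paths can be streamed through a deep pipeline (or spread across many cores) with no interaction between them. The instrument and parameters here are invented for illustration.

    // Generic Monte Carlo illustration (not JP Morgan's model): price a
    // European call under geometric Brownian motion. Each path is independent,
    // which is why this style of workload pipelines and parallelises so well.
    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <random>

    int main() {
        // Hypothetical parameters.
        const double S0 = 100.0, K = 105.0, r = 0.02, sigma = 0.25, T = 1.0;
        const int paths = 1000000;

        std::mt19937_64 rng(42);
        std::normal_distribution<double> normal(0.0, 1.0);

        double sum = 0.0;
        for (int i = 0; i < paths; ++i) {              // independent paths: trivially parallel
            double z = normal(rng);
            double ST = S0 * std::exp((r - 0.5 * sigma * sigma) * T
                                      + sigma * std::sqrt(T) * z);
            sum += std::max(ST - K, 0.0);              // call payoff
        }
        std::printf("Monte Carlo call price: %.4f\n", std::exp(-r * T) * sum / paths);
        return 0;
    }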
Apparently JP Morgan feels bullish enough about the technology to warrant a direct investment. In March, they acquired a 20 percent stake in Maxeler for an undisclosed amount. Although the investment is probably just a rounding error for the financial giant, it signals the company's interest in making sure Maxeler's intellectual assets remain intact.
There is certainly plenty of room to expand the Maxeler footprint at JP Morgan. To run all aspects of their financial business, the company currently has 14 thousand applications running on 50 thousand servers spread across more than 42 datacenters worldwide. Only a fraction of those applications will be amenable to acceleration, but each one has the potential to raise the company's bottom line.
"If we can compress the space, the time and the energy required to do these calculations, then it has hard business value for us," noted Weston. "It gives us, ultimately, a competitive edge."
Copyright © 1994-2011 Tabor Communications, Inc. All Rights Reserved.