Non-von Neumann Architectures

After three-quarters of a century of impressive progress in sequential processing, some perceive limitations in the approach defined by John von Neumann in 1945.  Although new technologies have advanced the basic model and embraced massively parallel acceleration, managing resources becomes much more difficult as core counts reach into the millions.  Many past efforts have explored this space (reduction machines, dataflow architectures, stream processing, message passing, and functional programming approaches); our primary focus is on exploring emerging architectures that embody these non-von Neumann ideas.

Cognitive Computing
Designers began mimicking biological approaches in order to capture the energy efficiency and parallelism naturally expressed in nature. The neuromorphic concept was developed by Carver Mead at Caltech in the late 1980s to describe the use of VLSI systems containing electronic analog circuits that imitate neuro-biological architectures present in the central nervous system. More recently the term neuromorphic has been used to describe analog, digital, and mixed-mode analog/digital VLSI and software systems that implement models of neural systems for perception, motor control, multisensory integration, and, of most interest to LBL, cognitive computation.

Artificial neural networks are being developed and deployed for deep learning, machine learning, image recognition, voice processing, and other suitable tasks.  The challenge lies in "programming" the devices and matching the available technology to the problem at hand, which today is still done largely by hand.

We are working with the Caffe tools developed by the Berkeley Vision and Learning Center, targeting IBM's TrueNorth neurosynaptic architecture, and also with IBM's internal Corelet Programming Environment.
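A minimal sketch of the Caffe side of this workflow is shown below; it only loads a trained network and runs inference on a single image patch.  The file names and the layer names ('data', 'prob') are placeholders, and the subsequent mapping of the trained network onto TrueNorth cores through IBM's Corelet tooling is not shown.

    # Minimal pycaffe inference sketch (illustrative only).
    # File names and layer names ('data', 'prob') are placeholders;
    # mapping the trained network onto TrueNorth cores is handled
    # separately by IBM's Corelet tool chain and is not shown here.
    import numpy as np
    import caffe

    caffe.set_mode_cpu()

    # Load a trained network: deploy definition plus learned weights.
    net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)

    # Present one 32x32 single-channel image patch to the input blob.
    patch = np.random.rand(32, 32).astype(np.float32)   # stand-in for real data
    net.blobs['data'].reshape(1, 1, 32, 32)
    net.blobs['data'].data[0, 0, :, :] = patch

    # Forward pass; 'prob' is assumed to be the softmax output layer.
    output = net.forward()
    print('class scores:', output['prob'][0])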



Locally we have one NS1e hardware board on loan from IBM Research Almaden, carrying a single 4,096-core SyNAPSE (TrueNorth) chip and supported by a dedicated server.  With it we are (1) exploring and mapping astrophysical image-matching tasks, (2) adapting accelerator-based high-energy particle-track recognition for near-real-time response, and (3) evaluating the architectural approach for incorporation within conventional HPC platforms in an accelerator model.
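For readers unfamiliar with the chip's organization, the following sketch models a single neurosynaptic core in a deliberately simplified form: a 256x256 binary synapse crossbar feeding 256 integrate-and-fire neurons with a constant leak and threshold.  The actual TrueNorth core offers configurable weights, leak, and stochastic modes; the parameter values here are purely illustrative.

    # Highly simplified model of one TrueNorth-style neurosynaptic core:
    # 256 axon inputs, a 256x256 binary synapse crossbar, and 256
    # integrate-and-fire neurons.  Parameter values are illustrative and
    # do not reflect the chip's actual configurable neuron model.
    import numpy as np

    N_AXONS, N_NEURONS = 256, 256
    rng = np.random.default_rng(0)

    crossbar = rng.integers(0, 2, size=(N_AXONS, N_NEURONS))  # binary synapses
    weight = 1          # assumed uniform synaptic weight
    leak = -1           # constant leak applied each tick
    threshold = 8       # firing threshold (illustrative)
    potential = np.zeros(N_NEURONS)

    def tick(input_spikes):
        """Advance the core by one time step given a binary axon spike vector."""
        global potential
        potential = potential + weight * (input_spikes @ crossbar) + leak
        fired = potential >= threshold
        potential[fired] = 0                  # reset neurons that fired
        potential = np.maximum(potential, 0)  # keep potentials non-negative
        return fired.astype(int)

    # Drive the core with random input spikes for a few ticks.
    for t in range(5):
        spikes_in = rng.integers(0, 2, size=N_AXONS)
        spikes_out = tick(spikes_in)
        print('tick', t, 'neurons fired:', int(spikes_out.sum()))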

We have partnered with Lawrence Livermore National Laboratory and will have future access to their 16-chip board containing more than 65,000 neurosynaptic cores.


Future tasks could include cognitive computing based upon machine learning of scientific data characteristics.  Most new information (Big Data) now arrives in unstructured forms such as video, images, symbols, and natural language, and this approach attempts to apply Artificial Intelligence techniques to process, discover, and infer patterns present in the data with minimal a priori knowledge.  Machine learning and automated data classification are already heavily utilized for social media; our focus is on applying those mechanisms to, for example, high-energy particle tracks for rapid triggering.
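As a concrete, deliberately simplified illustration of that triggering idea, the sketch below trains a single-layer logistic classifier to separate straight "track-like" patterns from noise in small binary images; a network of roughly this flavor is the kind of classifier that could later be mapped onto neurosynaptic hardware.  All data here are synthetic and the network is far smaller than a realistic trigger.

    # Toy illustration of track-vs-noise triggering: a single-layer
    # logistic classifier trained on synthetic 16x16 binary images.
    # Entirely synthetic data; meant only to show the flavor of the
    # classification step that would precede mapping onto hardware.
    import numpy as np

    rng = np.random.default_rng(1)
    SIZE = 16

    def make_track():
        """Image containing a straight diagonal 'track' plus sparse noise."""
        img = (rng.random((SIZE, SIZE)) < 0.05).astype(float)
        offset = rng.integers(-4, 5)
        for i in range(SIZE):
            j = i + offset
            if 0 <= j < SIZE:
                img[i, j] = 1.0
        return img.ravel()

    def make_noise():
        """Image containing only sparse random hits."""
        return (rng.random((SIZE, SIZE)) < 0.10).astype(float).ravel()

    # Build a small labeled data set: label 1 = track, 0 = noise.
    X = np.array([make_track() for _ in range(200)] +
                 [make_noise() for _ in range(200)])
    y = np.array([1] * 200 + [0] * 200)

    # Train with plain gradient steps on the logistic loss.
    w = np.zeros(X.shape[1])
    b = 0.0
    lr = 0.1
    for epoch in range(50):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability of 'track'
        grad = p - y
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()

    pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
    print('training accuracy: %.2f' % (pred == y).mean())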



By embracing neuromorphic principles we can gain insight into the morphology of individual neurons, circuits, and overall architectures, and into how this in turn gives rise to desirable computations, affects how information is represented, influences robustness to damage, incorporates learning and development, exhibits plasticity (adaptation to local change), and facilitates evolutionary modification.

Our engagement is threefold: an effort to evaluate the existing architecture from a system perspective; several research projects endeavoring to apply the 4,096-core unit (or the larger LLNL 16-chip board if required) to research data analysis; and finally a nascent joint HPC integration project targeting existing data-server-class machines.

