From UCLA, 29/11/23

Created by Superinnovators in harmony with AI

FINDINGS

An experimental computing system, modeled after the biological brain, achieved a 93.4% accuracy in identifying handwritten numbers.

This success is attributed to a novel training algorithm that provides real-time feedback during learning.

The algorithm surpassed traditional machine-learning methods, which update only after processing batches of data and reached 91.4% accuracy on the same task.
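
The contrast between per-sample ("online") updates and batch updates can be sketched with an ordinary logistic-regression classifier on synthetic data. This is a minimal illustration of the two update schedules only, not the paper's method: the real system feeds nanowire readout voltages into its classifier, and the accuracy figures above come from that hardware. All names, data and learning rates here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: two Gaussian blobs in 10 dimensions.
X = np.vstack([rng.normal(-1.0, 1.0, (200, 10)),
               rng.normal(+1.0, 1.0, (200, 10))])
y = np.array([0] * 200 + [1] * 200)
perm = rng.permutation(len(y))
X, y = X[perm], y[perm]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_online(X, y, lr=0.1):
    """Online schedule: the weights change after every single sample."""
    w = np.zeros(X.shape[1])
    for xi, yi in zip(X, y):
        w += lr * (yi - sigmoid(xi @ w)) * xi
    return w

def train_batch(X, y, lr=0.1):
    """Batch schedule: one update from the gradient over the whole dataset."""
    w = np.zeros(X.shape[1])
    w += lr * (y - sigmoid(X @ w)) @ X / len(y)
    return w

def accuracy(w, X, y):
    return float(np.mean((sigmoid(X @ w) > 0.5) == y))

acc_online = accuracy(train_online(X, y), X, y)
acc_batch = accuracy(train_batch(X, y), X, y)
print(f"online: {acc_online:.3f}  batch: {acc_batch:.3f}")
```

On this easy synthetic task both schedules score well; the point is only the difference in when the weights are updated, which is what distinguishes the new algorithm from batch-trained baselines.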

The research highlighted that the system’s memory of past inputs, stored within the network itself, enhanced its learning capabilities.

In contrast, conventional computing approaches rely on separate software or hardware for memory storage, distinct from the processor.

Schematic illustration of the nanowire network device setup for MNIST digit classification, demonstrating online dynamical learning. Top: MNIST handwritten digit samples (N samples × 784 pixel features) are normalised, converted to 1-D temporal voltage pulse streams (each pixel occupies Δt = 0.001 s) and delivered consecutively to the nanowire multi-electrode device. Bottom left: scanning electron micrograph of the 16-electrode device, showing the source electrode (channel 0, red), drain electrode (channel 3, green), readout electrodes (channels 1, 2, 12, 13, 15, blue) and unused electrodes (brown). Bottom right: readout voltages (i.e., N × M × 784 dynamical features) are input into an external linear classifier, in which the weight matrix Wn for the M × 784 features per digit sample is updated after each sample an, with the corresponding class yn as the target output (the digit '5' is shown as an example classification result). Credit: Nature

BACKGROUND

For 15 years, researchers at the California NanoSystems Institute at UCLA (CNSI) have been working on a new computational technology.

This brain-inspired system consists of a network of nanoscale wires containing silver, situated on electrodes.

The system operates by receiving and outputting data through electrical pulses.

Unlike conventional computers with fixed atomic structures in their memory and processing modules, this nanowire network physically adapts to stimuli.

The network’s memory is based on its atomic structure and is distributed throughout the system, similar to synaptic connections in a biological brain.

Collaborators at the University of Sydney developed a specialized algorithm to manage input and output, leveraging the system’s dynamic and parallel data processing abilities.


METHOD

The system was built with a material comprising silver and selenium, forming a network of entangled nanowires above 16 electrodes.

Researchers trained and tested the network using images of handwritten numbers from a standard dataset.

The images were communicated to the system pixel-by-pixel via electrical pulses, with varying voltages indicating different pixel shades.
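
The pixel-by-pixel encoding can be sketched as follows. The 0.001 s per-pixel duration is taken from the figure caption above; the peak voltage, the 0–255 grey scale and the function name are illustrative assumptions, not values from the paper.

```python
import numpy as np

DT = 0.001    # seconds per pixel, per the figure caption
V_MAX = 1.0   # assumed peak drive voltage (illustrative)

def pixels_to_pulse_stream(image):
    """Flatten a 28x28 image into a 784-step voltage-amplitude sequence,
    one pulse per pixel, with amplitude proportional to pixel brightness."""
    pixels = np.asarray(image, dtype=float).ravel()   # 784 grey levels
    amplitudes = V_MAX * pixels / 255.0               # map 0..255 to 0..V_MAX
    times = np.arange(pixels.size) * DT               # pulse start times
    return times, amplitudes

# Example: a fake 28x28 "image" with a bright diagonal stroke.
img = np.zeros((28, 28))
np.fill_diagonal(img, 255)
t, v = pixels_to_pulse_stream(img)
print(t.shape, v.shape, v.max())
```

Each digit thus becomes a 784-step voltage sequence delivered to the device over roughly 0.8 s, with brighter pixels producing stronger pulses.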

IMPACT

This developing nanowire network is anticipated to be more energy-efficient than current silicon-based AI systems for similar tasks.

It shows potential in processing complex, time-varying data like weather or traffic patterns, which current AI finds challenging without massive data and energy resources.

The study’s co-design approach—simultaneous hardware and software development—suggests nanowire networks could complement traditional electronic devices.

With its brain-like memory and adaptive learning capabilities, the network could excel in “edge computing,” processing complex data locally without relying on distant servers.

Applications could span robotics, autonomous navigation in vehicles and drones, Internet of Things technology, health monitoring, and coordinating multi-location sensor data.

