
Tuesday, February 21, 2023

Cybersecurity Defenders Are Expanding Their AI Toolbox

 Pacific Northwest National Laboratory News Release:


Deep reinforcement learning shows promise to stop adversaries, defend networks

February 16, 2023

RICHLAND, Wash.—Scientists have taken a key step toward harnessing a form of artificial intelligence known as deep reinforcement learning, or DRL, to protect computer networks.

 

When faced with sophisticated cyberattacks in a rigorous simulation setting, deep reinforcement learning was effective at stopping adversaries from reaching their goals up to 95 percent of the time. The outcome points to a promising role for autonomous AI in proactive cyber defense.

 

Scientists from the Department of Energy’s Pacific Northwest National Laboratory documented their findings in a research paper and presented their work Feb. 14 at a workshop on AI for Cybersecurity during the annual meeting of the Association for the Advancement of Artificial Intelligence in Washington, D.C.

 

The starting point was the development of a simulation environment to test multistage attack scenarios involving distinct types of adversaries. Creating such a dynamic attack-defense simulation environment for experimentation is itself a win. The environment offers researchers a way to compare the effectiveness of different AI-based defensive methods under controlled test settings.

 

Such tools are essential for evaluating the performance of deep reinforcement learning algorithms. The method is emerging as a powerful decision-support tool for cybersecurity experts—a defense agent with the ability to learn, adapt to quickly changing circumstances, and make decisions autonomously. While other forms of artificial intelligence are routinely used to detect intrusions or filter spam messages, deep reinforcement learning expands defenders’ abilities to orchestrate sequential decision-making plans in their daily face-off with adversaries.

 

Deep reinforcement learning offers smarter cybersecurity, the ability to detect changes in the cyber landscape earlier, and the opportunity to take preemptive steps to scuttle a cyberattack.

 

DRL: Decisions in a broad attack space

“An effective AI agent for cybersecurity needs to sense, perceive, act and adapt, based on the information it can gather and on the results of decisions that it enacts,” said Samrat Chatterjee, a data scientist who presented the team’s work. “Deep reinforcement learning holds great potential in this space, where the number of system states and action choices can be large.”

 

DRL, which combines reinforcement learning and deep learning, is especially adept in situations where a series of decisions in a complex environment need to be made. Good decisions leading to desirable results are reinforced with a positive reward (expressed as a numeric value); bad choices leading to undesirable outcomes are discouraged via a negative cost.
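The reward-and-cost loop can be sketched with a tabular Q-learning update, a simplified stand-in for the deep networks the team actually trained. The states, actions, and reward values below are hypothetical and purely illustrative:

```python
import numpy as np

# Illustrative sketch only; not the paper's environment or code.
# Hypothetical setup: 3 system states, 2 defensive actions.
n_states, n_actions = 3, 2
Q = np.zeros((n_states, n_actions))  # expected value of each action per state

alpha, gamma = 0.1, 0.9  # learning rate and discount factor

def update(state, action, reward, next_state):
    """Q-learning update: good outcomes (positive reward) reinforce the
    chosen action; bad outcomes (negative reward) discourage it."""
    best_next = Q[next_state].max()
    Q[state, action] += alpha * (reward + gamma * best_next - Q[state, action])

# A good decision in state 0 earns +1; a bad one in state 1 costs -1.
update(0, 1, +1.0, 1)
update(1, 0, -1.0, 2)
print(Q[0, 1] > 0, Q[1, 0] < 0)  # the estimates now reflect the feedback
```

Over many such updates, the table (or, in DRL, the network approximating it) comes to encode which defensive actions pay off in which situations.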

 

It’s similar to how people learn many tasks. A child who does their chores might receive positive reinforcement with a desired playdate; a child who doesn’t do their chores gets negative reinforcement, such as having a digital device taken away.

 

“It’s the same concept in reinforcement learning,” Chatterjee said. “The agent can choose from a set of actions. With each action comes feedback, good or bad, that becomes part of its memory. There’s an interplay between exploring new opportunities and exploiting past experiences. The goal is to create an agent that learns to make good decisions.”
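The interplay between exploring new opportunities and exploiting past experiences that Chatterjee describes is commonly handled with an epsilon-greedy policy. A minimal sketch, not the team's implementation:

```python
import random

def choose_action(q_values, epsilon=0.1):
    """Epsilon-greedy policy: with probability epsilon, explore a random
    action; otherwise exploit the action with the best known value."""
    if random.random() < epsilon:                    # explore
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])  # exploit

# With epsilon = 0, the agent always exploits the best-known action.
print(choose_action([0.2, 1.5, -0.3], epsilon=0.0))  # → 1
```

Epsilon is typically decayed during training, so early episodes favor exploration and later ones favor the learned policy.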

 

OpenAI Gym and MITRE ATT&CK

The team used an open-source software toolkit known as OpenAI Gym as a basis to create a custom and controlled simulation environment to evaluate the strengths and weaknesses of four deep reinforcement learning algorithms.
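The release does not include the environment's code. As a rough illustration of what a Gym-style attack-defense environment looks like, the sketch below exposes the `reset` and `step` methods that standard DRL algorithms train against; the class name, dynamics, and reward values are all invented for illustration:

```python
# Hypothetical sketch of a Gym-style attack-defense environment.
# The real PNNL environment is not public here; everything below
# (names, state encoding, rewards) is invented for illustration.

NUM_STAGES = 7  # reconnaissance through exfiltration, per the release

class AttackDefenseEnv:
    """Follows the classic Gym interface (reset / step) so standard
    DRL algorithms can interact with it."""

    def reset(self):
        self.stage = 0          # adversary starts at reconnaissance
        return self.stage       # observation: current attack stage

    def step(self, action):
        # Toy dynamics: mitigation action 1 halts the attack;
        # action 0 lets the adversary advance one stage.
        if action == 1:
            reward, done = +1.0, True       # defense succeeded
        else:
            self.stage += 1
            done = self.stage >= NUM_STAGES
            reward = -1.0 if done else 0.0  # exfiltration reached: penalty
        return self.stage, reward, done, {}

env = AttackDefenseEnv()
obs = env.reset()
obs, reward, done, _ = env.step(1)  # defender mitigates immediately
print(reward, done)  # → 1.0 True
```

The real environment is far richer (seven tactics, 15 techniques, 23 mitigation actions, three adversary types), but the same `reset`/`step` contract is what lets off-the-shelf DRL algorithms plug in.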

 

The team used the MITRE ATT&CK framework, developed by MITRE Corp., and incorporated seven tactics and 15 techniques deployed by three distinct adversaries. Defenders were equipped with 23 mitigation actions to try to halt or prevent the progression of an attack.

 

Stages of the attack included tactics of reconnaissance, execution, persistence, defense evasion, command and control, collection, and exfiltration (when data is transferred out of the system). An attack was recorded as a win for the adversary if they successfully reached the final exfiltration stage.
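The win condition can be encoded directly. The tactic names come from the release; the function itself is a hypothetical illustration, not the team's scoring code:

```python
# Attack tactics named in the release, in order of progression.
TACTICS = ("reconnaissance", "execution", "persistence", "defense evasion",
           "command and control", "collection", "exfiltration")

def adversary_wins(stages_reached):
    """An attack counts as a win for the adversary only if it reaches
    the final exfiltration stage."""
    return "exfiltration" in stages_reached

print(adversary_wins(["reconnaissance", "execution"]))  # → False
print(adversary_wins(list(TACTICS)))                    # → True
```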

 

“Our algorithms operate in a competitive environment—a contest with an adversary intent on breaching the system,” said Chatterjee. “It’s a multistage attack, where the adversary can pursue multiple attack paths that can change over time as they try to go from reconnaissance to exploitation. Our challenge is to show how defenses based on deep reinforcement learning can stop such an attack.”

Like a toddler starting to walk learns from bumps and bruises, algorithms based on deep reinforcement learning, or DRL, are trained through rewards for good decisions and penalties for bad decisions. (Photo by Daxiao Productions | Shutterstock.com)

DQN outpaces other approaches

The team trained defensive agents based on four deep reinforcement learning algorithms: DQN (Deep Q-Network) and three variations of what’s known as the actor-critic approach. The agents were trained with simulated data about cyberattacks, then tested against attacks that they had not observed in training.
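DQN's core idea is to regress a neural network Q(s, a) toward a Bellman target built from a separate, slowly updated target network. A minimal sketch of that target computation, illustrative rather than the paper's code:

```python
import numpy as np

# Sketch of the DQN regression target for one transition (s, a, r, s'):
#     y = r + gamma * max_a' Q_target(s', a')    (y = r if s' is terminal)

gamma = 0.99  # discount factor (hypothetical value)

def dqn_target(reward, next_q_values, done):
    """Compute the Bellman target for one transition.
    next_q_values: the target network's outputs for state s'."""
    if done:
        return reward
    return reward + gamma * np.max(next_q_values)

print(dqn_target(1.0, np.array([0.5, 2.0]), done=False))  # → 2.98
```

The actor-critic variants the team compared against instead learn a policy directly alongside a value estimate; the release does not detail why DQN generalized better here, only that it stopped the most attacks against unseen adversaries.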

 

DQN performed the best.

  • Least sophisticated attacks (based on varying levels of adversary skill and persistence): DQN stopped 79 percent of attacks midway through attack stages and 93 percent by the final stage.
  • Moderately sophisticated attacks: DQN stopped 82 percent of attacks midway and 95 percent by the final stage.
  • Most sophisticated attacks: DQN stopped 57 percent of attacks midway and 84 percent by the final stage—far higher than the other three algorithms.

“Our goal is to create an autonomous defense agent that can learn the most likely next step of an adversary, plan for it, and then respond in the best way to protect the system,” Chatterjee said.

 

Despite the progress, no one is ready to entrust cyber defense entirely to an AI system. Instead, a DRL-based cybersecurity system would need to work in concert with humans, said coauthor Arnab Bhattacharya, formerly of PNNL.

 

“AI can be good at defending against a specific strategy but isn’t as good at understanding all the approaches an adversary might take,” Bhattacharya said. “We are nowhere near the stage where AI can replace human cyber analysts. Human feedback and guidance are important.”

 

In addition to Chatterjee and Bhattacharya, authors of the AAAI workshop paper include Mahantesh Halappanavar of PNNL and Ashutosh Dutta, a former PNNL scientist. The work was funded by DOE’s Office of Science. Some of the early work that spurred this specific research was funded by PNNL’s Mathematics for Artificial Reasoning in Science initiative through the Laboratory Directed Research and Development program.


Tuesday, February 7, 2023

Microscopy Images Could Lead to New Ways to Control Excitons for Quantum Computing

 Berkeley Lab News:


For the first time, scientists observe exciton quasiparticles confined in atomically thin materials, opening new paths to controlling excitons for quantum and optoelectronic applications
MEDIA RELATIONS | (510) 486-5183 | FEBRUARY 7, 2023
The unit-cell averaged electron microscopy-derived composite image shows excitons in green. The moiré unit cell outlined in the lower right of the exciton map is about 8 nanometers in size. (Credit: Sandhya Susarla and Peter Ercius/Berkeley Lab)
– By Alison Hatt

Excitons are drawing attention as possible quantum bits (qubits) in tomorrow’s quantum computers and are central to optoelectronics and energy-harvesting processes. However, these charge-neutral quasiparticles, which exist in semiconductors and other materials, are notoriously difficult to confine and manipulate. Now, for the first time, researchers have created and directly observed highly localized excitons confined in simple stacks of atomically thin materials. The work confirms theoretical predictions and opens new avenues for controlling excitons with custom-built materials. 

“The idea that you can localize excitons on specific lattice sites by simply stacking these 2D materials is exciting because it has a variety of applications, from designer optoelectronic devices to materials for quantum information science,” said Archana Raja, co-lead of the project and a staff scientist at Lawrence Berkeley National Laboratory’s (Berkeley Lab) Molecular Foundry, whose group led the device fabrication and optical spectroscopy characterization. 
 
The team fabricated devices by stacking layers of tungsten disulfide (WS2) and tungsten diselenide (WSe2). A small mismatch in the spacing of atoms in the two materials gave rise to a moiré superlattice, a larger periodic pattern that arises from the overlap of two smaller patterns with similar but not identical spacing of elements. Using state-of-the-art electron microscopy tools, the researchers collected structural and spectroscopic data on the devices, combining information from hundreds of measurements to determine the probable locations of excitons. 
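The roughly 8-nanometer moiré unit cell visible in the image follows directly from the small lattice mismatch. A back-of-the-envelope check, using approximate literature lattice constants rather than values from the paper:

```python
# Back-of-the-envelope moiré period from lattice mismatch (zero twist).
# Lattice constants are approximate literature values, not from the paper.
a_WS2, a_WSe2 = 0.315, 0.328  # nm

# For aligned lattices, the moiré period is a1 * a2 / |a1 - a2|.
moire_period = a_WS2 * a_WSe2 / abs(a_WSe2 - a_WS2)
print(f"{moire_period:.1f} nm")  # ~8 nm, consistent with the imaged unit cell
```

A mismatch of only about 4 percent thus produces a superlattice more than 20 times larger than either atomic lattice, which is what makes the exciton-trapping pattern visible at the nanometer scale.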

“We used basically all the most advanced capabilities on our most advanced microscope to do this experiment,” said Peter Ercius, who led the imaging work at the Molecular Foundry’s National Center for Electron Microscopy. “We were pushing the boundaries of everything we can do, from making the sample to analyzing the sample to doing the theory.” 

Theoretical calculations, led by Steven Louie, a faculty senior scientist at Berkeley Lab and distinguished professor of physics at UC Berkeley, revealed that large atomic reconstructions take place in the stacked materials, which modulate the electronic structure to form a periodic array of “traps” where excitons become localized. Discovery of this direct relationship between the structural changes and the localization of excitons overturns prior understanding of these systems and establishes a new approach to designing optoelectronic materials.  

The team’s findings are described in a paper published in the journal Science with postdoctoral fellows Sandhya Susarla (now a professor at Arizona State University) and Mit H. Naik as co-lead authors. Next the team will explore approaches to tuning the moiré lattice on demand and making the phenomenon more robust to material disorder. 

The Molecular Foundry is a DOE Office of Science user facility at Berkeley Lab.

The research was supported by the Department of Energy’s Office of Science.
###