
Monday, April 17, 2023

FSD Beta v11.4 Release Notes Leaked, Giga Mexico Timeline, Volkswagen & ...

'Bloomberg Technology' Full Show (04/17/2023)

Meet the Autonomous Lab of the Future

 Berkeley Lab News Release:


Robots operate instruments and artificial intelligence makes decisions to find useful new materials at the A-Lab
LAUREN BIRON | (510) 621-9370 | APRIL 17, 2023
Berkeley Lab researcher Yan Zeng looks over the starting point at A-Lab. The new lab combines automation and artificial intelligence to speed up materials science discovery. (Credit: Marilyn Sargent/Berkeley Lab)
To accelerate development of useful new materials, researchers are building a new kind of automated lab that uses robots guided by artificial intelligence.
 
“Our vision is using AI to discover the materials of the future,” said Yan Zeng, a staff scientist leading the A-Lab at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab). The “A” in A-Lab is deliberately ambiguous, standing for artificial intelligence (AI), automated, accelerated, and abstracted, among other things.

Scientists have computationally predicted hundreds of thousands of novel materials that could be promising for new technologies – but testing to see whether any of those materials can be made in reality is a slow process. Enter A-Lab, which can process 50 to 100 times as many samples as a human every day and use AI to quickly pursue promising finds. 

A-Lab could help identify and fast-track materials for several research areas, such as solar cells, fuel cells, thermoelectrics (materials that generate energy from temperature differences), and other clean energy technologies. To start, researchers will focus on finding new materials for batteries and energy storage, addressing critical needs for an affordable, equitable, and sustainable energy supply.

Once a target material is selected – by human researchers or their AI agents – a series of robots at A-Lab carries out the steps to synthesize it:

The first robot weighs and mixes different combinations of starting ingredients known as powder precursors. The robot can choose from nearly 200 precursors, including different metal oxides containing elements such as lithium, iron, copper, manganese, and nickel. After mixing the powders with solvent to evenly distribute them, the robot moves the slurry into crucibles.

The next robotic arm loads the crucibles into furnaces that can reach 2200 degrees Fahrenheit and inject various mixtures of gases, such as nitrogen, hydrogen, oxygen, and air. This allows the ingredients to bake in different environments and take on different properties. The AI system determines what temperature the samples should bake at, and for how long.

After the robot removes the baked crucibles, it must extract the new material. An automated machine modeled on a gumball dispenser adds a ball bearing to the cup. Intense shaking grinds the new substance into a fine powder that the robot loads onto a slide.

The final robotic arm moves the samples into two automated machines for analysis. The X-ray diffractometer determines whether one or more new chemicals have been formed, and how much of the initial ingredients are left over. The automated electron microscope does further shape and chemical analysis. Both tools send their results back to the AI system. 

Guided by artificial intelligence, the cycle adjusts and begins again. The AI at the heart of the system sets new starting combinations and amounts of precursors and instructions for the furnaces. Researchers keep an eye on the system through video feeds and alerts that can flag successes, like if a sample comes back with a desired result, or if a robot encounters an error.
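The propose–synthesize–analyze cycle described above can be sketched as a short program. This is a minimal illustration of the closed-loop idea, not A-Lab's actual software: the precursor list, the function names, and the random stand-in for the AI are all hypothetical.

```python
import random

PRECURSORS = ["Li2CO3", "Fe2O3", "CuO", "MnO2", "NiO"]  # small subset of ~200

def propose_recipe(history):
    """AI step: pick precursors, amounts, and furnace settings.
    A random baseline stands in here for the real trained model."""
    return {
        "precursors": random.sample(PRECURSORS, 2),
        "temperature_f": random.choice([1400, 1800, 2200]),
        "hours": random.choice([4, 8, 12]),
    }

def synthesize(recipe):
    """Robot steps: mix, bake, grind, analyze. Returns a simulated
    phase purity in place of real X-ray diffraction results."""
    return random.random()

def closed_loop(target_purity=0.9, max_cycles=50):
    """Run the cycle until a sample hits the target or cycles run out."""
    history = []
    for cycle in range(max_cycles):
        recipe = propose_recipe(history)
        purity = synthesize(recipe)
        history.append((recipe, purity))   # feedback to the AI
        if purity >= target_purity:
            return cycle + 1, recipe       # success flagged to researchers
    return None, None

cycles, best = closed_loop()
```

In the real system, the proposal step is a trained model informed by computed materials data and prior cycles, and the synthesis step is hours of robotic work ending in diffractometer and electron-microscope analysis.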
Robots and AI hunt for new materials at A-Lab
“Some people might compare our setup with manufacturing, where automation has been used for a long time,” Zeng said. “What I think is exciting here is we’ve adapted to a research environment, where we never know the outcome until the material is produced. The whole setup is adaptive, so it can handle the changing research environment as opposed to always doing the same thing.”

The system at A-Lab is designed as a “closed loop,” where decision making is handled without human interference. The robots operate around the clock, freeing researchers up to spend more time designing experiments.

“We see this as a new way of doing research,” said Gerd Ceder, the principal investigator for A-Lab. In many ways, Ceder noted, lab research has been the same for the last 70 years: the equipment may have gotten better, but ultimately a person is needed to take measurements, analyze results, and decide what to do next.

“We need materials solutions for things like the climate crisis that we can build and deploy now, because we can’t wait – so we’re trying to break this cycle that is so slow by having machines that correct themselves,” Ceder said. “The important thing is not working in parallel, but instead to iterate rapidly, the way scientists operate. We want the system to try something, analyze the data, and then decide what to do next to get closer to the goal.”

A-Lab is thought to be the first fully automated lab that uses inorganic powders as the starting ingredients. This “solid-state synthesis” is a more difficult task than automating processes that use liquids, which can be easily dispensed with pumps and valves. But the extra effort comes with a big payoff.

“Our solid-state synthesis is more realistic, can incorporate a wider variety of materials, and can make larger quantities of materials,” Ceder said. “You can produce quantities that are ready for application, not just science exploration. It’s ready to scale.”

A-Lab researchers had to adapt both hardware and software for the robots, furnaces, and analysis tools, getting them to perform certain actions and talk to the central hub controlled by the AI. In some cases, such as the shaker to remove the newly baked material, they had to build a new solution entirely from scratch.

As the automated system creates and analyzes samples, the data will flow back to both A-Lab researchers and data repositories such as the Materials Project. Scientists are also building out integrations with other projects, such as MaterialSynthesis.org, and leveraging X-rays from Berkeley Lab’s powerful synchrotron, the Advanced Light Source.

“You can imagine the power of a lab that autonomously starts with predictions, requests data and computations to get the information it needs, and then proceeds,” Zeng said. “As A-Lab tests materials, we’re going to learn the gap between our computations and reality. That will not only give us a handful of useful new materials, but also train our models to make better predictions that can guide future science.”

Work on A-Lab began in 2020, and the project later received funding from the DOE’s Office of Science and Laboratory Directed Research & Development (LDRD) Program, which encourages innovative ideas and experiments. Zeng and a team of 10 students and postdocs began building out the lab in earnest at the start of 2022 and installed the final piece a little over one year later.

A-Lab began operating in February and has already synthesized several novel materials in collaboration with the Materials Project. Researchers are currently fine-tuning the system while continuing to add features. These include robots that can restock supplies and change precursors, synthesis instruments that let them mix and heat liquids, and additional equipment to analyze newly created materials.
###

You Can Smash This Coffee Cup and Leave It on the Ground (It's Made from...

Big Ideas 2023 | Electric Vehicles: Defying The Skeptics With Exponentia...

Thursday, April 13, 2023

Amazon releases Bedrock to help businesses build their own A.I. tools

Generative A.I. is creating custom advertisements for marketing brands

Palantir CEO: Real value in A.I. will be intersection of business ethics...

Amazon CEO Andy Jassy on jumping into the generative A.I. race with new ...

Asus ROG Phone 7 Ultimate: Gaming Phone Gets Souped Up

Big Ideas 2023 | Bitcoin: A Durable Network

The Moms Pushing for Safer Social Media Laws: What's Different This Time...

'Bloomberg Technology' Full Show (04/12/2023)

Monday, April 10, 2023

Tesla Energy Set to Boom, Price Cuts Continue, 4680 Model Y Orders Open Up

Five Ways QSA is Advancing Quantum Computing

 Berkeley Lab News Release:


Since its launch in 2020, the Quantum Systems Accelerator has enabled major progress in quantum information science – including record-setting sensors, smarter algorithms, and a demonstration that a 256-atom quantum device can deliver science results
LAUREN BIRON | (510) 621-9370 | APRIL 10, 2023
A new impact report from the Quantum Systems Accelerator highlights five of the many advances the center has made since its launch in 2020. (Credit: Jenny Nuss/Berkeley Lab)
Quantum computers could someday perform certain calculations faster than classical computers, with applications in science, medicine, security, finance, and beyond – but first, researchers need to improve the underlying science and technology. Since its launch in 2020, the Quantum Systems Accelerator (QSA) has already made major advances in both hardware and programming, improving the quantum tools that researchers hope will help solve some of humanity’s biggest questions.

QSA is one of the Department of Energy’s five national quantum information science research centers with a focus on all three major technologies for quantum computing: superconducting circuits, trapped-ion systems, and neutral atoms. 

“We believe there are synergies between these three big technologies and that each one may have unique abilities and applications for solving different kinds of problems,” said Rick Muller, the director of QSA and a senior manager at Sandia National Laboratories. “By looking at all three of them together, we can more easily find their strengths, apply innovations across technologies, and design a path forward to a universal quantum computer.”

Led by Lawrence Berkeley National Laboratory (Berkeley Lab), QSA brings together more than 250 experts from 14 other institutions: Sandia National Laboratories, University of Colorado Boulder, MIT Lincoln Laboratory, Caltech, Duke University, Harvard University, Massachusetts Institute of Technology, Tufts University, UC Berkeley, University of Maryland, University of New Mexico, University of Southern California, University of Texas at Austin, and Canada's Université de Sherbrooke.

Together, QSA researchers are developing ways to better control qubits (the building blocks of quantum computers), finding algorithms and applications for current and emerging quantum information systems, and speeding their transfer to industry. QSA is also preparing the next generation of quantum scientists through peer mentoring programs, career fairs, and training for high school students and teachers.

“We’re catalyzing national leadership in quantum information through co-design of quantum devices, algorithms, and engineering solutions, with the goal of delivering quantum advantage,” said Bert de Jong, the deputy director of QSA and a senior scientist at Berkeley Lab. “We’re advancing imperfect quantum technologies and figuring out how we in academia and the national laboratories working with our partners in industry can start using them today. At the same time, we’re preparing scientists to use them to solve big science questions.”

In March, the Quantum Systems Accelerator issued a full impact report on advances made since the center launched in 2020. Here are five highlights achieved by QSA scientists and partners so far:

Studied quantum magnetism and matter with a 256-atom computer (assembled using laser beams)

QSA researchers from Harvard University and MIT used a special quantum device to observe several exotic states of matter for the first time and studied magnetism at the quantum level. Their findings help explain the physics underlying materials’ properties and could be used to engineer exotic materials of the future. Their research was performed using a “programmable quantum simulator” similar to a quantum computer. The team at Harvard built the simulator using hundreds of laser beams known as “optical tweezers,” arranging 256 ultra-cold rubidium atoms that acted as qubits. By some measures, that makes it the largest programmable quantum processor demonstrated to date. By moving the atoms into shapes such as squares, honeycombs, and triangles, QSA scientists manipulated how the qubits would interact with one another and made important measurements of quantum phases of matter and quantum spin liquids.

Stacked qubit layers on microchips to help computers grow

One way to build a useful quantum computer is by connecting qubits with superconducting circuits, which can conduct electricity without energy loss when extremely cold. But with every qubit added, engineering the connections and electronics becomes more difficult. You can imagine a group of qubits spread out like a grid on a piece of paper; trying to snake connections to the innermost qubits causes crowding that can degrade the qubits or signals. To address the challenge, scientists at MIT and MIT Lincoln Laboratory are taking inspiration from commercial electronics and investigating qubits with layers. These stacks of electronic chips reroute the connections to attach vertically, as though perpendicular to our grid – a kind of “3D integration.” The change allows researchers to potentially connect, control, and read larger numbers of qubits. Through funding from QSA and other partners, they’ve already built and tested a “2-stack” qubit chip (with two layers), and QSA researchers are working on further enhanced versions. This milestone is an important step toward more densely packed qubits that can perform more complex calculations.

Made a record-setting quantum sensor that can be used to hunt dark matter

Any study that uses electronics is limited by random variations or noise that can hide the information researchers are searching for. Quantum systems, such as arrays of ultracold atoms, can be used to make extremely precise measurements that are better at picking the signal from the noise. Led by the University of Colorado Boulder, QSA researchers built a quantum sensor from 150 beryllium ions (atoms with an electric charge) arranged in a flat crystal. By using entangled particles, where a change in one immediately impacts the other, the quantum sensor measured electric fields with more than 10 times the sensitivity of any previously demonstrated atomic sensor. Picking up incredibly tiny changes makes such a sensor a powerful tool that could potentially enhance gravitational wave detectors or look for dark matter, one of the biggest mysteries in modern physics.
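The sensitivity boost from entanglement can be made concrete with textbook quantum-metrology scaling. The numbers below are an illustrative back-of-envelope bound, not the experiment's actual model: N independent ions average noise down as 1/√N (the standard quantum limit), while entangled ions can approach 1/N (the Heisenberg limit).

```python
import math

N = 150  # ions in the flat crystal, as in the QSA experiment

# Textbook metrology scalings (illustrative only):
sql = 1 / math.sqrt(N)   # standard quantum limit, independent ions
heisenberg = 1 / N       # Heisenberg limit, maximally entangled ions
gain = sql / heisenberg  # = sqrt(N) ≈ 12.2

print(f"potential sensitivity gain from entanglement: {gain:.1f}x")
```

The reported more-than-tenfold improvement is consistent with this square-root bound, though the actual gain achieved depends on the specific entangled state prepared and on the dominant noise sources.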

Harnessed machine learning to correct errors in real time

To improve quantum computers, researchers need a way to find and correct errors, such as a qubit randomly flipping between 0 and 1. Methods such as continuous quantum error correction (CQEC) keep an eye on qubits and look for telltale signs of problems – but they too are subject to noise that can hide issues. QSA researchers at UC Berkeley designed a machine learning algorithm that can process the CQEC signals and find errors more accurately than current real-time methods. Because the new algorithm is flexible, learns on the job, and requires small amounts of computing power, it could improve continuous error correction systems and support larger and more stable quantum computers.
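To see why a learning-based filter can beat a fixed rule on noisy correction signals, here is a deliberately tiny stand-in: a one-feature logistic regression, trained by gradient ascent in pure Python, that learns to flag a simulated qubit flip from a noisy continuous readout. The data model and the classifier are illustrative only; the UC Berkeley algorithm is considerably more sophisticated.

```python
import math
import random

random.seed(0)

def make_window(flipped, n=10, noise=0.5):
    """Simulated continuous syndrome readout: its mean drifts from 0
    toward 1 after a qubit flip, buried in Gaussian noise."""
    return [random.gauss(1.0 if flipped else 0.0, noise) for _ in range(n)]

data = [(make_window(label), label) for label in [0, 1] * 200]
random.shuffle(data)
train, test = data[:300], data[300:]

w, b = 0.0, 0.0  # logistic regression on one feature: the window mean
for _ in range(50):
    for window, label in train:
        x = sum(window) / len(window)
        p = 1 / (1 + math.exp(-(w * x + b)))  # predicted flip probability
        w += 0.1 * (label - p) * x            # gradient ascent on
        b += 0.1 * (label - p)                # the log-likelihood

correct = sum(
    ((w * sum(win) / len(win) + b) > 0) == bool(label) for win, label in test
)
accuracy = correct / len(test)
```

With the two signal levels several noise standard deviations apart after averaging, the learned decision boundary settles near the midpoint and held-out accuracy is high.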

Designed a simpler way to link up qubits

Our everyday computers use circuits with logic gates (such as “AND,” “OR,” and “NOT”) to perform operations. Quantum circuits can also use gates as their building blocks – but instead of devices like transistors, their gates are made of qubits and interactions between qubits. While one or two entangled qubits can be used for basic operations, linking together many qubits can speed up computations, simplify quantum circuits, and make computers more powerful. QSA researchers led by Duke University developed a new, one-step method of creating these more versatile gates with multiple entangled qubits. Their technique expands logic operations for quantum computers, and includes a particular kind of gate (known as an N-Toffoli gate) that experts predict will be important in quantum adders, multipliers, and other algorithms – including ones with applications in cryptography.
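The logic of an N-Toffoli gate is easy to state: flip the target qubit if and only if every control qubit is 1. The sketch below applies that rule to a classical basis state (a real quantum gate acts this way on every basis state in a superposition at once); it illustrates the gate's action, not Duke's trapped-ion implementation.

```python
def n_toffoli(bits, controls, target):
    """Apply an N-Toffoli (multi-controlled NOT) to a classical basis
    state: flip `target` iff every control bit is 1."""
    bits = list(bits)
    if all(bits[c] for c in controls):
        bits[target] ^= 1
    return bits

# Three controls, one target: the target flips only when all controls are 1.
assert n_toffoli([1, 1, 1, 0], [0, 1, 2], 3) == [1, 1, 1, 1]
assert n_toffoli([1, 0, 1, 0], [0, 1, 2], 3) == [1, 0, 1, 0]
```

Because the gate computes a multi-bit AND into the target, gates of this family are natural building blocks for carry computation in quantum adders, which is one reason the N-Toffoli is expected to matter for arithmetic-heavy algorithms.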
###

'Bloomberg Technology' Full Show (04/10/2023)

Eclipse Bets on Real-World Startups

What TSMC's Sales Miss Signals for Electronics Demand

Apple Leads PC Shipment Plunge With 40% Drop

AI will further increase efficiency for companies, says Evercore ISI's M...

Thursday, April 6, 2023

Private Markets' Appetite for Tech

Online Publisher Substack Turns to Its Writers for Funding

Going Viral: AI Disinformation

TechCheck Weekly #5: Sports Rights Sleeper Hold

Australia ends Binance derivatives license, and Ethereum preps next big ...

Roku Plus Series Review: Roku's First TV Is Good, But Not Great

Bitcoin and Ethereum outperform bank stocks since March 10

Cramer's Mad Dash on Coinbase: I would not touch this thing at all

VC Spotlight: Outrage at American Investors Backing Chinese AI Companies

Tech War With China | Bloomberg Technology 04/05/2023

What Are the Greatest Barriers for Blockchain?

Kia Debuts First All Electric SUV

U.S. lawmakers meet with big tech on China

Could Bitcoin’s Price Reach $1,000,000? With Balaji Srinivasan

Cybercrime website Genesis Market shut down in global law enforcement cr...

Wednesday, April 5, 2023

Tesla Just Unveiled New Details on Master Plan 3

Impossible to Regulate AI, You.com CEO Says

Apple Wants to Rely Less on China to Make iPhones

Big Ideas 2023 | Digital Wallets: Disintermediating Traditional Banking

Only 20 names are driving the tech equity rally, says Citi's Kristen Bit...

Semiconductors stand to make massive gains with the evolution of A.I., s...

Google chip manufacturing takes aim at Nvidia's generative A.I.

MicroStrategy buys 1,045 bitcoin, and Invest Diva explains her crypto co...

Google takes AIM at Nvidia, claims AI chips are faster, greener

Affordability will be driving factor for EV market growth, says Stellant...

We need to take time to get AI right, says Elevation's Roger McNamee

Apple Music Classical: How the App Works | Tech News Briefing | WSJ

Apple vs. Banks: The Digital-Wallet War, Explained | WSJ

Slowing down A.I. advances looks like 'mistaken effort,' says LinkedIn c...

Gene Therapy Could Cure Cancer with Professor Waseem Qasim

VC Spotlight: Opportunities With the AI Technology Shift

Internal Strife Roils Biden's Anti-Hacking Dream Team

GM Expects 50,000 EVs Sold in Q2

Talking Tech: Job Cuts, Virgin Orbit Bankruptcy

Massive Tesla Production Plan Rumors + Ford Mach E Sales Drop

Tuesday, February 21, 2023

Huge FSD Beta Updates, Semi Insights, New Battery Contract

ChatGPT Has A Serious Problem

Bing's AI Fell in Love. Now Microsoft Is Friend-Zoning It

Cybersecurity Defenders Are Expanding Their AI Toolbox

 Pacific Northwest National Laboratory News Release:


Deep reinforcement learning shows promise to stop adversaries, defend networks

February 16, 2023

RICHLAND, Wash.—Scientists have taken a key step toward harnessing a form of artificial intelligence known as deep reinforcement learning, or DRL, to protect computer networks.

 

When faced with sophisticated cyberattacks in a rigorous simulation setting, deep reinforcement learning was effective at stopping adversaries from reaching their goals up to 95 percent of the time. The outcome suggests a promising role for autonomous AI in proactive cyber defense.

 

Scientists from the Department of Energy’s Pacific Northwest National Laboratory documented their findings in a research paper and presented their work Feb. 14 at a workshop on AI for Cybersecurity during the annual meeting of the Association for the Advancement of Artificial Intelligence in Washington, D.C.

 

The starting point was the development of a simulation environment to test multistage attack scenarios involving distinct types of adversaries. The creation of such a dynamic attack-defense simulation environment is itself a win: it offers researchers a way to compare the effectiveness of different AI-based defensive methods under controlled test settings.

 

Such tools are essential for evaluating the performance of deep reinforcement learning algorithms. The method is emerging as a powerful decision-support tool for cybersecurity experts—a defense agent with the ability to learn, adapt to quickly changing circumstances, and make decisions autonomously. While other forms of artificial intelligence are standard for detecting intrusions or filtering spam messages, deep reinforcement learning expands defenders’ abilities to orchestrate sequential decision-making plans in their daily face-off with adversaries.

 

Deep reinforcement learning offers smarter cybersecurity, the ability to detect changes in the cyber landscape earlier, and the opportunity to take preemptive steps to scuttle a cyberattack.

 

DRL: Decisions in a broad attack space

“An effective AI agent for cybersecurity needs to sense, perceive, act and adapt, based on the information it can gather and on the results of decisions that it enacts,” said Samrat Chatterjee, a data scientist who presented the team’s work. “Deep reinforcement learning holds great potential in this space, where the number of system states and action choices can be large.”

 

DRL, which combines reinforcement learning and deep learning, is especially adept in situations where a series of decisions in a complex environment need to be made. Good decisions leading to desirable results are reinforced with a positive reward (expressed as a numeric value); bad choices leading to undesirable outcomes are discouraged via a negative cost.

 

It’s similar to how people learn many tasks. A child who does their chores might receive positive reinforcement with a desired playdate; a child who doesn’t do their work gets negative reinforcement, like having a digital device taken away.

 

“It’s the same concept in reinforcement learning,” Chatterjee said. “The agent can choose from a set of actions. With each action comes feedback, good or bad, that becomes part of its memory. There’s an interplay between exploring new opportunities and exploiting past experiences. The goal is to create an agent that learns to make good decisions.”
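The feedback loop Chatterjee describes is, in its simplest form, tabular Q-learning; DQN replaces the table with a deep neural network. The states, actions, and reward below are hypothetical placeholders, not PNNL's actual setup.

```python
import random

actions = ["monitor", "block_ip", "isolate_host"]  # hypothetical defenses
Q = {}  # the agent's "memory": (state, action) -> learned value

def choose(state, epsilon=0.2):
    """Explore new opportunities vs. exploit past experience."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((state, a), 0.0))

def update(state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """Feedback, good or bad, becomes part of the agent's memory."""
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

# One step of experience: blocking during reconnaissance paid off.
update("reconnaissance", "block_ip", reward=1.0, next_state="contained")
action = choose("reconnaissance", epsilon=0.0)  # now prefers "block_ip"
```

The `epsilon` parameter controls the interplay Chatterjee mentions: with probability epsilon the agent tries something new, and otherwise it exploits what its memory says has worked before.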

 

OpenAI Gym and MITRE ATT&CK

The team used an open-source software toolkit known as OpenAI Gym as a basis to create a custom, controlled simulation environment to evaluate the strengths and weaknesses of four deep reinforcement learning algorithms.

 

The team used the MITRE ATT&CK framework, developed by MITRE Corp., and incorporated seven tactics and 15 techniques deployed by three distinct adversaries. Defenders were equipped with 23 mitigation actions to try to halt or prevent the progression of an attack.

 

Stages of the attack included the tactics of reconnaissance, execution, persistence, defense evasion, command and control, collection, and exfiltration (when data is transferred out of the system). An attack was recorded as a win for the adversary if they successfully reached the final exfiltration stage.
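The multistage setup can be sketched as a Gym-style environment: the defender picks a mitigation each step, and the episode ends in a win (attack halted) or a loss (exfiltration reached). The stage names come from the article, but the dynamics, probabilities, and rewards are illustrative placeholders, not PNNL's environment; the class follows the Gym reset/step convention without depending on the gym package.

```python
import random

STAGES = ["reconnaissance", "execution", "persistence", "defense_evasion",
          "command_and_control", "collection", "exfiltration"]

class AttackDefenseEnv:
    """Defender picks a mitigation each step; the adversary tries to
    advance one stage toward exfiltration."""

    def __init__(self, block_prob=0.6, seed=0):
        self.block_prob = block_prob  # chance the chosen mitigation works;
        self.rng = random.Random(seed)  # a real env would condition this
                                        # on which of 23 mitigations is used

    def reset(self):
        self.stage = 0
        return STAGES[self.stage]

    def step(self, mitigation):
        if self.rng.random() < self.block_prob:
            return STAGES[self.stage], +1.0, True   # attack halted: win
        self.stage += 1
        if self.stage == len(STAGES) - 1:
            return STAGES[self.stage], -1.0, True   # exfiltration: loss
        return STAGES[self.stage], -0.1, False      # attack in progress

env = AttackDefenseEnv()
state, done, total = env.reset(), False, 0.0
while not done:
    state, reward, done = env.step("any_mitigation")
    total += reward
```

A defensive agent trained against such an environment would replace the fixed `"any_mitigation"` choice with a learned policy over the available mitigation actions.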

 

“Our algorithms operate in a competitive environment—a contest with an adversary intent on breaching the system,” said Chatterjee. “It’s a multistage attack, where the adversary can pursue multiple attack paths that can change over time as they try to go from reconnaissance to exploitation. Our challenge is to show how defenses based on deep reinforcement learning can stop such an attack.”

Like a toddler starting to walk learns from bumps and bruises, algorithms based on deep reinforcement learning, or DRL, are trained through rewards for good decisions and penalties for bad decisions. (Photo by Daxiao Productions | Shutterstock.com)

DQN outpaces other approaches

The team trained defensive agents based on four deep reinforcement learning algorithms: DQN (Deep Q-Network) and three variations of what’s known as the actor-critic approach. The agents were trained with simulated data about cyberattacks, then tested against attacks that they had not observed in training.

 

DQN performed the best.

  • Least sophisticated attacks (based on varying levels of adversary skill and persistence): DQN stopped 79 percent of attacks midway through attack stages and 93 percent by the final stage.
  • Moderately sophisticated attacks: DQN stopped 82 percent of attacks midway and 95 percent by the final stage.
  • Most sophisticated attacks: DQN stopped 57 percent of attacks midway and 84 percent by the final stage—far higher than the other three algorithms.

“Our goal is to create an autonomous defense agent that can learn the most likely next step of an adversary, plan for it, and then respond in the best way to protect the system,” Chatterjee said.

 

Despite the progress, no one is ready to entrust cyber defense entirely to an AI system. Instead, a DRL-based cybersecurity system would need to work in concert with humans, said coauthor Arnab Bhattacharya, formerly of PNNL.

 

“AI can be good at defending against a specific strategy but isn’t as good at understanding all the approaches an adversary might take,” Bhattacharya said. “We are nowhere near the stage where AI can replace human cyber analysts. Human feedback and guidance are important.”

 

In addition to Chatterjee and Bhattacharya, authors of the AAAI workshop paper include Mahantesh Halappanavar of PNNL and Ashutosh Dutta, a former PNNL scientist. The work was funded by DOE’s Office of Science. Some of the early work that spurred this specific research was funded by PNNL’s Mathematics for Artificial Reasoning in Science initiative through the Laboratory Directed Research and Development program.