
As Russian soldiers spoke over the airwaves during the recent war, an AI was listening. Their words were automatically captured, transcribed, translated, and analyzed using several artificial intelligence algorithms developed by Primer, a US company that provides AI services for intelligence analysts. Using AI systems to surveil Russia’s army at scale shows the growing importance of sophisticated open source intelligence in military conflicts.

The tool developed by Primer also shows how valuable machine learning could be for parsing intelligence information. Primer has already sold AI algorithms trained to transcribe and translate phone calls, as well as ones that can pull out key terms or phrases. Sean Gourley, Primer’s CEO, says the company’s engineers modified these tools to carry out four new tasks: to gather audio from web feeds that broadcast communications intercepted with software that emulates radio receiver hardware; to remove noise, including background chatter and music; to transcribe and translate Russian speech; and to highlight key statements relevant to the battlefield situation. In some cases this involved retraining the machine learning models to recognize colloquial terms for military vehicles or weapons.
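Strung together, those four tasks amount to a staged processing pipeline. Below is a minimal sketch of that shape in Python; every stage function is a placeholder standing in for a trained model, not Primer’s actual API:

```python
# Minimal sketch of the four-stage pipeline described above.
# Every stage function is a placeholder, NOT Primer's actual API;
# in a real system each would wrap a trained model.

def denoise(audio: bytes) -> bytes:
    # placeholder: a real stage would filter out background chatter and music
    return audio

def transcribe_ru(audio: bytes) -> str:
    # placeholder for a Russian speech-to-text model
    return "танки на дороге"

def translate_ru_en(text: str) -> str:
    # placeholder for a Russian-to-English translation model
    return "tanks on the road"

def extract_key_statements(text: str) -> list[str]:
    # placeholder: flag terms relevant to the battlefield situation,
    # including colloquial names for vehicles or weapons
    military_terms = {"tanks", "artillery", "convoy"}
    return [w for w in text.split() if w in military_terms]

def process(audio: bytes) -> list[str]:
    # input: audio already gathered from a web radio feed (task 1)
    clean = denoise(audio)                   # task 2: remove noise
    russian = transcribe_ru(clean)           # task 3: transcribe...
    english = translate_ru_en(russian)       # ...and translate
    return extract_key_statements(english)   # task 4: highlight key statements

print(process(b"\x00" * 16))  # -> ['tanks']
```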


Tapping into open source intelligence data means sifting immense quantities of information. Primer has distinguished itself by its ability to parse language. In recent years, AI has become capable of summarizing text and answering questions about it thanks to a particular kind of large machine learning model known as a transformer. By weighing the relationships between all the words in a sequence at once (a mechanism called attention), this type of model is better able to make sense of input such as a long string of words in a sentence. Transformers have yielded AI programs that are capable of generating coherent news articles or even writing computer code to perform a given task.
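For a concrete sense of what such models do off the shelf (this is generic open source tooling, not Primer’s stack), the Hugging Face transformers library wraps a pretrained transformer behind a one-line summarization pipeline:

```python
# Summarization with a pretrained transformer via the open source
# Hugging Face `transformers` library (illustrative; not Primer's tooling).
from transformers import pipeline

summarizer = pipeline("summarization")  # downloads a default pretrained model

article = (
    "Using AI systems to surveil an army at scale shows the growing "
    "importance of open source intelligence in military conflicts. "
    "Transformer models help analysts transcribe, translate, and "
    "summarize the resulting flood of text."
)
print(summarizer(article, max_length=30, min_length=10)[0]["summary_text"])
```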



Scientists have created a moving magnetic slime capable of encircling smaller objects, healing itself, and undergoing “very large deformation” to squeeze through and travel along narrow spaces. The slime, which is controlled by magnets, is also a good electrical conductor and can be used to interconnect electrodes, its creators say.

Prof Li Zhang, of the Chinese University of Hong Kong, who co-created the slime, emphasised that the substance was real scientific research and not an April Fools’ joke, despite the timing of its release. The slime contains magnetic particles so that it can be manipulated to travel, rotate, or form O and C shapes when external magnets are applied to it. The blob was described as a “magnetic slime robot” in a study published in the peer-reviewed journal Advanced Functional Materials.


The slime has “visco-elastic properties”, Zhang said, meaning that “sometimes it behaves like a solid, sometimes it behaves like a liquid”. It is made of a mixture of a polymer called polyvinyl alcohol, borax – which is widely used in cleaning products – and particles of neodymium magnet.

While the team have no immediate plans to test it in a medical setting, the scientists envisage the slime could be useful in the digestive system, for example in reducing the harm from a small swallowed battery.

The magnetic particles in the slime, however, are themselves toxic. The researchers coated the slime in a layer of silica – the main component in sand – to form what should, in principle, be a protective layer. “The safety would also strongly depend on how long you would keep them inside of your body,” Zhang said.



Deep neural networks are increasingly helping to design microchips, predict how proteins fold, and outperform people at complex games. However, researchers have now discovered there are fundamental theoretical limits to how stable and accurate these AI systems can actually get.

In artificial neural networks, components dubbed “neurons” are fed data and cooperate to solve a problem, such as recognizing images. The neural net repeatedly adjusts the links between its neurons and checks whether the resulting patterns of behavior are better at finding a solution. Over time, the network discovers which patterns are best at computing results. It then adopts these as defaults, mimicking the process of learning in the human brain. A neural network is dubbed “deep” if it possesses multiple layers of neurons.
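That loop of adjusting links and checking the result is easy to see in miniature. The sketch below trains a toy two-layer network on the XOR problem in plain NumPy; the layer size, learning rate, and iteration count are arbitrary illustration choices, not drawn from the study:

```python
import numpy as np

# Toy "deep" network: one hidden layer of neurons learning XOR.
# Sizes, learning rate, and iteration count are illustration choices.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    # forward pass: each layer of "neurons" transforms the data
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: adjust the links (weights) to reduce the error
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0]
```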


Previous research suggested there is mathematical proof that stable, accurate neural networks exist for a wide variety of problems. However, in a new study, researchers find that although such networks may theoretically exist for many problems, there may paradoxically be no algorithm that can actually compute them. A digital computer can compute only certain specific neural networks, and for some problems computing a desirable network is impossible. Worse, it can be hard to tell whether training has produced a stable, accurate network until the system is deployed, which may be too late for some scenarios.
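To make “stability” concrete: a network is unstable if a tiny, targeted change to its input flips its output. The toy sketch below is my own minimal illustration, not the study’s construction; it uses a bare linear classifier as a stand-in, where a worst-case perturbation of max-norm eps shifts the score by eps times the sum of the absolute weights:

```python
import numpy as np

# Toy illustration of instability (not the study's construction):
# a linear classifier sign(w.x + b) and a worst-case input perturbation
# of max-norm eps, which shifts the score by eps * sum(|w|).
w = np.array([3.0, -2.0, 1.5])
b = 0.1
x = np.array([0.2, 0.1, -0.3])

score = w @ x + b                              # +0.05: barely class 1
eps = 0.2
x_adv = x - eps * np.sign(w) * np.sign(score)  # push the score past zero
score_adv = w @ x_adv + b                      # -1.25: now class 0

print(f"original score {score:+.3f} -> class {int(score > 0)}")
print(f"perturbed score {score_adv:+.3f} -> class {int(score_adv > 0)}")
```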

These new findings do not suggest that all neural networks are fatally flawed, but that they may prove stable and accurate only in limited scenarios. The results are not aimed at dampening artificial-intelligence research, and may instead spur new work exploring ways to bend these rules.


