Just days after taking office, on 22nd January, US President Donald Trump delivered one of many big surprises to the market. At the White House, he announced that OpenAI, SoftBank, Oracle, and a UAE sovereign wealth fund would form a joint venture to co-build AI infrastructure in the US, under the name of the science-fiction film 'Stargate'.
Initially, the joint venture will commit $100bn of capex to roll out data centers in the US. Then, depending on certain undisclosed criteria, total capex could reach $500bn over the next five years. For comparison, the 'Big 4' hyperscalers – Amazon, Microsoft, Alphabet, and Meta – are expected to spend a combined $320bn of capex in 2025F. The kick-off of the 'Stargate Project' will surely add more fuel to the already buoyant AI capex boom.
Not long after the announcement of the 'Stargate Project', OpenAI and SoftBank appeared to tie themselves even more closely together. According to people with knowledge of OpenAI's latest fund-raising deal, SoftBank is set to invest $40bn in the company, in a deal that would value OpenAI at a whopping $250bn. We have no details of the post-money shareholder structure yet, but SoftBank would surely become OpenAI's second-largest shareholder, behind only Microsoft.
In addition, the timing of the deal makes it more interesting. Given the rise of DeepSeek, the market had held some concerns about the future of OpenAI, especially the moat created by its first-mover advantage and its closed-source models. Through this investment, SoftBank defied those concerns and cast a vote of confidence in OpenAI's future growth prospects. It may be too early to tell who will be the final winner in AI, but it is certain that the AI capex party will go on for the moment.
Over the past few months there has been a lot of talk in the AI world about “Agentic AI” – advanced AI agents that assist users with their work by automatically drawing data from multiple systems and data sources. OpenAI is among the first to turn this vision into reality.
Following the introduction of ‘Operator’, one of OpenAI’s first agentic AI tools, on 23rd Jan, the company launched ‘Deep Research’ on 2nd Feb. Compared to Operator, Deep Research is more advanced, with greater autonomy. While Operator helps users gather data from numerous websites, Deep Research goes a step further by acting as a personal research assistant.
Powered by a specialized version of OpenAI’s upcoming o3 model — optimized for web browsing and data analysis — ‘Deep Research’ can search, interpret, and analyse vast amounts of text, images, and PDFs from the internet. It then generates detailed reports with quality comparable to those produced by professional research analysts. The entire process, from submitting a prompt to receiving a completed report, can take as little as 5 to 30 minutes.
However, while ‘Deep Research’ is powerful, it is not without limitations. The agent is still prone to “hallucinations,” occasionally providing inaccurate information or flawed inferences. It may also struggle to distinguish credible sources from unreliable ones, making human verification essential to ensure accuracy and reliability.
Deep Research is expected to find applications in professional fields such as finance, science, and engineering, as well as assist consumers in making informed decisions about purchases like clothing, electronics, and cars.
Currently, it is available exclusively to ChatGPT Pro users in the United States, with a limit of 100 queries per month due to high processing costs. OpenAI plans to extend access to Plus, Team, and Enterprise users in the future. However, the service is not yet available in the UK or Europe.
Google DeepMind's AlphaGeometry and AlphaProof have demonstrated the ability to solve complex maths problems at a level approaching the very best human minds. AlphaGeometry excels in geometry, performing close to the level of a human International Mathematical Olympiad (IMO) gold medallist, while AlphaProof has reached the level of an IMO silver medallist.
AlphaGeometry uses a neuro-symbolic approach, combining a neural language model with a symbolic deduction engine. The language model identifies patterns and relationships in geometric data and proposes useful constructions, such as points, lines, and circles. The symbolic engine uses formal logic to make deductions and find solutions based on these suggestions. This approach allows it to tackle geometry problems requiring multiple steps. Importantly, the solutions generated are verifiable and use classical geometry rules.
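The division of labour described above – a neural model proposing constructions, a symbolic engine deducing from them – can be sketched in a few lines of Python. Everything here is hypothetical scaffolding: `propose_construction` stands in for the neural language model, and the rule set is a toy one, but the loop structure mirrors the paragraph.

```python
# Toy sketch of a neuro-symbolic propose-and-deduce loop (hypothetical names).
# A symbolic engine forward-chains over known facts; when it gets stuck,
# a stand-in for the neural model proposes an auxiliary construction.

def deduce(facts, rules):
    """Symbolic engine: apply rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def propose_construction(facts):
    """Stand-in for the language model: suggest one auxiliary construction."""
    # A real model would propose points, lines, or circles; here it is scripted.
    for suggestion in ["midpoint M of AB"]:
        if suggestion not in facts:
            return suggestion
    return None

def solve(facts, rules, goal, max_rounds=5):
    facts = set(facts)
    for _ in range(max_rounds):
        facts = deduce(facts, rules)
        if goal in facts:
            return True
        aux = propose_construction(facts)
        if aux is None:
            return False
        facts.add(aux)  # add the proposed construction and deduce again
    return False

rules = [
    (frozenset({"midpoint M of AB", "AB is a diameter"}), "M is the centre"),
    (frozenset({"M is the centre"}), "MA = MB = r"),
]
print(solve({"AB is a diameter"}, rules, "MA = MB = r"))  # True
```

Note that the deduction alone cannot reach the goal; only after the proposed midpoint is added does the symbolic engine close the gap – which is exactly why the neural and symbolic parts need each other.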
AlphaProof is a reinforcement-learning system that focuses on formal math reasoning. It can translate natural language problem statements into formal statements. The system generates solution candidates and proves or disproves them using a search over possible proof steps in the formal language Lean. In a virtuous circle, the proofs are then used to reinforce the language model, enabling it to solve increasingly challenging problems.
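Since AlphaProof searches over proof steps in Lean, a tiny example gives the flavour of what a formalized statement looks like. This is an elementary illustration written by hand (using standard Mathlib lemmas), not an AlphaProof output:

```lean
import Mathlib

-- The kind of object AlphaProof works with: a natural-language claim
-- ("a sum of two squares is never negative") rendered as a Lean theorem,
-- whose proof is a sequence of checkable steps.
theorem sum_sq_nonneg (a b : ℝ) : 0 ≤ a ^ 2 + b ^ 2 := by
  have ha : 0 ≤ a ^ 2 := sq_nonneg a
  have hb : 0 ≤ b ^ 2 := sq_nonneg b
  linarith
```

Because Lean checks every step mechanically, any proof the system finds is guaranteed correct – which is what makes the search signal reliable enough to feed back into training.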
A key aspect of these systems is their ability to train on large datasets of synthetic data, generated without human demonstrations. AlphaGeometry used a "symbolic deduction and traceback" process to create 100 million unique examples of varying difficulty.
AlphaProof trained by proving or disproving millions of problems. This approach overcomes the data bottleneck that has limited the use of formal languages in machine learning. In the 2024 IMO competition, a combined system of AlphaProof and AlphaGeometry solved four out of six problems, earning a silver medal equivalent score. This achievement marks the first time an AI system has reached this level of performance in the IMO.