For oil and gas companies looking to drill wells in new areas, the problem comes down to revenue versus cost. The goal is simple: drill as few wells as possible while drawing the most oil or gas from underground reservoirs for the longest time. The more wells you install, the higher the cost and the greater the impact on the environment.
However, finding the right well placement can quickly become a very complex math problem. Place too few wells in the wrong spots and a lot of resources will remain in the ground. Place too many wells too close together and costs rise significantly, and the wells may even end up pumping from the same part of the reservoir.
Shahram Farhadi knows how complex the challenge is. Farhadi is chief technology officer for industrial AI at Beyond Limits, a startup spun out of the California Institute of Technology and NASA's Jet Propulsion Laboratory to commercialize technologies built for space exploration in industrial settings. Founded in 2014, the company aims to apply cognitive AI, machine learning, and deep learning technologies to industries such as oil and gas, the Internet of Things (IoT) for manufacturing and industrial goods, electricity and natural resources, healthcare, and other evolving markets. Many of these already use HPC environments to run their most complex workloads.
Placing wells in a reservoir is a sequential decision-making problem, one in which the option space changes and grows with each decision. Farhadi notes that in chess there are nearly 5 million possible moves after the first five have taken place; for Go, the number is on the order of 10 to the 12th power. When optimizing well placement in even a small reservoir, spanning the location and timing of drilling, the number of producer and injector wells, and the non-commutative selection of five vertical drilling positions in a row, the possible combinations can reach 10 to the 20th power.
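The combinatorial growth is easy to see in a back-of-the-envelope calculation. In the sketch below, the grid size, well count, and well types are illustrative assumptions, not figures from the article; it simply counts ordered sequences of well placements:

```python
import math

# Illustrative assumptions: a 40 x 40 grid of candidate drilling cells,
# five wells placed in sequence (order matters, positions cannot repeat),
# and two well types (producer or injector) chosen per well.
cells = 40 * 40
wells = 5
well_types = 2

# Ordered placements of 5 distinct cells, times a type choice per well.
combinations = math.perm(cells, wells) * well_types ** wells

print(f"{combinations:.2e} possible drilling sequences")
```

Even this toy setting yields on the order of 10^17 sequences; adding drilling timing, injection schedules, and horizontal trajectories pushes the count toward the 10^20 figure cited above.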
Combining an advanced AI framework with HPC can significantly reduce the scale of the challenge.
"Anything AI can learn and apply to a problem, such as basic rules for how far apart wells should be, helps reduce the number of calculations and makes the problem more concrete," Farhadi tells The Next Platform.
Where to place wells has been a challenge for oil and gas companies for years. Over that time, they have developed seismic imaging capabilities and simulation models, run on HPC systems, that describe underground reservoirs. Optimizers can then run variations of those models to determine what types of wells to place, how many, and where. According to Farhadi, at least two generations of engineers have worked to refine these equations and their nuances and to tune and learn from the data.
The problem is that these calculations rely on a combination of brute force and optimization techniques such as particle swarm optimization and genetic algorithms, run against computationally expensive reservoir simulators, which makes these complex problems even harder to address. That is where Beyond Limits' advanced AI framework comes in.
"The industry has really good simulations, and that could create opportunities for high-performance AI. Why not use the simulations to generate data and learn from the generated data?" he says. "In that sense, you are already a good part of the way there. Other industries, like the automotive industry, are doing this to a greater or lesser extent. But from the energy industry's point of view, these simulations are quite abundant."
Beyond Limits applies techniques such as deep reinforcement learning (DRL), using its framework to train reinforcement learning agents to make optimal sequential recommendations for well placement. The framework couples a reservoir simulator with a novel deep convolutional neural network. The agent captures data and learns from successive iterations of the simulator, reducing the number of possible combinations of moves after each decision is made. By retaining what it has learned from previous iterations, the system can quickly narrow the choices down to one of the best answers.
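The article does not publish Beyond Limits' implementation, but the agent-simulator loop it describes can be sketched with a toy stand-in: a tiny one-dimensional "reservoir," a fixed drilling cost, and a tabular agent with a Monte Carlo-style update that learns which placement sequence pays off. Everything here, including the oil values, cost, and update rule, is an illustrative assumption:

```python
import random

# Toy stand-in for a reservoir simulator: each cell holds some oil, and a
# well recovers its cell's oil once, minus a fixed drilling cost. All
# numbers are illustrative assumptions, not values from the article.
OIL = [5.0, 9.0, 2.0, 8.0, 1.0, 7.0]   # recoverable oil per cell
WELL_COST = 4.0                         # cost of drilling one well
N_WELLS = 2                             # wells placed per episode

def simulate(placements):
    """Return the episode's NPV-like reward for a placement sequence."""
    reward, taken = 0.0, set()
    for p in placements:
        reward += (0.0 if p in taken else OIL[p]) - WELL_COST
        taken.add(p)
    return reward

def train(episodes=5000, eps=0.2, alpha=0.3, seed=0):
    """Tabular agent: states are placement prefixes, actions are cells."""
    rng = random.Random(seed)
    Q = {}  # state (tuple of placements so far) -> list of action values
    for _ in range(episodes):
        state, placements = (), []
        for _ in range(N_WELLS):
            values = Q.setdefault(state, [0.0] * len(OIL))
            if rng.random() < eps:              # explore a random cell
                action = rng.randrange(len(OIL))
            else:                               # exploit the best known cell
                action = values.index(max(values))
            placements.append(action)
            state = tuple(placements)
        reward = simulate(placements)
        # Monte Carlo update: push each visited (state, action) pair
        # toward the episode's final reward.
        state = ()
        for action in placements:
            Q[state][action] += alpha * (reward - Q[state][action])
            state = state + (action,)
    return Q

def greedy_plan(Q):
    """Read out the learned placement sequence."""
    state, placements = (), []
    for _ in range(N_WELLS):
        placements.append(Q[state].index(max(Q[state])))
        state = tuple(placements)
    return placements

Q = train()
plan = greedy_plan(Q)
print("plan:", plan, "reward:", simulate(plan))
```

After a few thousand interactions with the toy simulator, the greedy readout settles on drilling the richest cells, which mirrors the article's point: each simulator call is expensive, so the agent's value is in pruning the combinations it must try.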
"One of the areas we paid particular attention to was the simulation of the underground movement of fluids," says Farhadi. "Think of a block of rock with oil somewhere in it. There is water coming in, and as you take the hydrocarbon out, this whole dynamic change starts to happen. The water may break through, and what is happening there is a very delicate process. Because the information is limited, building this picture takes a lot of time. But say you have built a simulator. You tell this simulator, 'I want to place a well here [and] here,' and the simulator evolves this in time, provides the flow rates, and says, 'If you do this, you will get this.' If I am managing this asset, the question for me is exactly that: how many wells do I put in? What kind of wells, vertical [and] horizontal? Do I inject water from the beginning? Do I inject gas? This is basically reservoir engineering expertise. It is playing a game of how to optimally extract this natural resource from these assets, and the assets are usually worth billions of dollars. This is an invaluable asset for the companies that produce oil and gas. The question now is how to get the maximum out of it."
The goal is to reach a high net present value (NPV) score: essentially, the value of the oil or gas recovered (and sold) minus the costs incurred. The fewest wells required to extract the most resources means more profit.
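NPV itself is a standard discounted cash-flow calculation. The sketch below uses made-up cash flows and discount rate, not figures from the article, to show why a plan with fewer wells can score higher even though it extracts more slowly:

```python
def npv(cash_flows, rate):
    """Net present value: discount each year's cash flow back to today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical cash flows in $M, year 0 first (negative = drilling capex).
three_wells = [-30.0, 25.0, 22.0, 18.0, 12.0]  # lower capex, slower drawdown
five_wells = [-50.0, 35.0, 28.0, 12.0, 4.0]    # higher capex, faster drawdown
rate = 0.10                                     # 10% annual discount rate

print(f"3 wells: ${npv(three_wells, rate):.1f}M NPV")
print(f"5 wells: ${npv(five_wells, rate):.1f}M NPV")
```

In this toy case the three-well plan wins despite recovering less early revenue, because the extra drilling capex of the five-well plan outweighs its front-loaded production.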
"The NPV is low over the initial iterations, but after about 150,000 interactions with the simulator it can reach $40 million," he says. "The important thing here is that the simulation itself can be expensive to run, so its use has to be optimized, smart, and efficient."
That meant creating a system that lets Beyond Limits scale models most efficiently wherever oil and gas companies need them. The company tested three systems: two CPU-only and one hybrid running both CPUs and GPUs. Beyond Limits used an on-premises 20-core CPU system running Intel Core i9-7900X chips, a cloud-based 96-core CPU system with the same processor, and a hybrid Amazon Web Services p4d.24xlarge instance pairing 20 CPU cores with two Nvidia "Ampere" A100 GPU accelerators.
The company also went one step further with a 36-hour run on a p4d.24xlarge AWS instance configured with 90 CPU cores and eight A100 GPUs.
The benchmarked metrics covered the instantaneous speed of the reinforcement learning calculations, the number of episodes and forward action searches completed, and the value of the best solution found in terms of NPV.
Beyond Limits found that the hybrid setup outperformed both CPU-only systems. At peak, the hybrid ran at 184.3 percent of the 96-core system's benchmark and 1,169.5 percent of the 20-core system's. To reach the same number of actions investigated at the end of 120,000 seconds, the CPU-GPU hybrid improved elapsed time by 245.4 percent over the 20 CPU cores and 152.9 percent over the 96 CPU cores. (See graph below.) For NPV, the hybrid instance delivered a boost of about 109 percent compared to the vertical-well 20-core CPU setup.
Increasing the number and types of wells considered drives up not only costs but also the amount of computation required, so scale and efficiency matter when trying to reach the optimal NPV.
"This problem is very complicated in terms of the number of possible combinations, so the more hardware you put in, the further you can go, within obvious physical limits," says Farhadi. "The GPU is true added value because we can now achieve higher NPV. With more flops you can compute more, and you are more likely to find a better configuration. The idea here was to show that there is a technology, called reinforcement learning, that helps with advanced combinatorial simulation-based optimization. We benchmarked this with a simple, small reservoir model, but if we brought it to a real field model, with that number of cells, it would effectively be a large, high-performance training system in its own right."
Beyond Limits is also building advanced AI systems for other industries. One example is a system designed to assist with refinery planning. Another helps chemists develop engine oil and other lubricant formulations faster and more efficiently, he says.
"You rely on human experts to come up with a framework [to] solve a problem, so it's important that the system you build respects that and is able to digest it," says Farhadi. "It's not just data; it's also human knowledge. How do you incorporate and organize this? For example, how do you encode the knowledge engineers have learned from the data, or the physics, as constraints on the AI? It's an interesting question even at the forefront of deep learning [and] machine learning, and one currently under consideration: not just looking at the pixels, but seeing whether we can represent a hierarchical understanding of incoming objects more robustly. We actually started this well before 2014, because one of the big motivations was that the industries we went to needed it. That was what they had, and they probably needed to augment it with a digital assistant. It contained data elements, but wasn't quite capable."