Journals of the Academy of Mathematics and Systems Science, Chinese Academy of Sciences

Special Topic on “Solving Differential Equations with Deep Learning”
This special issue focuses on deep learning (DL) methods and their applications to linear and nonlinear differential models.
  • XIAO Shanshan, CHEN Mengyi, ZHANG Ruili, TANG Yifa
    Journal of Systems Science & Complexity. 2024, 37(2): 441-462. https://doi.org/10.1007/s11424-024-3252-7
    In this paper, the authors propose a neural network architecture designed specifically for a class of Birkhoffian systems, namely Newtonian systems. The proposed model utilizes recurrent neural networks (RNNs) and is built on a mathematical framework that ensures preservation of the Birkhoffian structure. The authors demonstrate the effectiveness of the proposed model on a variety of problems for which preserving the Birkhoffian structure is important, including the linear damped oscillator, the Van der Pol equation, and a high-dimensional example. Compared with unstructured baseline models, the Newtonian neural network (NNN) is more data-efficient and exhibits superior generalization ability.
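As a rough, generic illustration of learning dynamics with a recurrent rollout (not the authors' Birkhoffian-structure-preserving architecture), the sketch below trains a plain GRU step model on a crudely integrated damped-oscillator trajectory; the names StepRNN and rollout, the network sizes, and the data generation are illustrative assumptions.

```python
# Minimal sketch (assumptions: plain GRU step model, synthetic damped-oscillator data);
# it does NOT enforce the Birkhoffian structure described in the paper.
import torch
import torch.nn as nn

class StepRNN(nn.Module):
    """Predicts the next state (q, p) from the current one with a GRU cell."""
    def __init__(self, state_dim=2, hidden_dim=32):
        super().__init__()
        self.cell = nn.GRUCell(state_dim, hidden_dim)
        self.readout = nn.Linear(hidden_dim, state_dim)

    def rollout(self, x0, steps):
        h = torch.zeros(x0.shape[0], self.cell.hidden_size)
        x, traj = x0, []
        for _ in range(steps):
            h = self.cell(x, h)
            x = x + self.readout(h)          # residual update of the state
            traj.append(x)
        return torch.stack(traj, dim=1)      # (batch, steps, state_dim)

# Synthetic trajectory of q'' + 0.1 q' + q = 0, integrated crudely with explicit Euler.
dt, steps = 0.1, 200
q, p = torch.tensor([1.0]), torch.tensor([0.0])
states = []
for _ in range(steps + 1):
    states.append(torch.stack([q, p], dim=-1))
    q, p = q + dt * p, p - dt * (q + 0.1 * p)
data = torch.cat(states).unsqueeze(0)        # (1, steps + 1, 2)

model = StepRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(2000):
    pred = model.rollout(data[:, 0], steps)
    loss = ((pred - data[:, 1:]) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```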
  • YAN Zhenya
    Journal of Systems Science & Complexity. 2024, 37(2): 389-390. https://doi.org/10.1007/s11424-024-4002-6
  • GUO Yixiao, MING Pingbing
    Journal of Systems Science & Complexity. 2024, 37(2): 391-412. https://doi.org/10.1007/s11424-024-3250-9
    The authors present a novel deep learning method for computing eigenvalues of the fractional Schrödinger operator. The proposed approach combines a newly developed loss function with an innovative neural network architecture that incorporates prior knowledge of the problem. These improvements enable the method to handle both high-dimensional problems and problems posed on irregular bounded domains. The authors successfully compute up to the first 30 eigenvalues for various fractional Schrödinger operators. As an application, the authors put forward a conjecture on the fractional-order isospectral problem, which has not yet been studied.
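The abstract does not give the authors' loss or architecture. As a generic illustration of estimating a smallest eigenvalue with a neural ansatz, the sketch below minimizes a Monte Carlo Rayleigh quotient for the ordinary (non-fractional) operator -Δ + V on a box, a standard deep-Ritz-style device; the cutoff factor, potential, and network sizes are assumptions, not the authors' choices.

```python
# Minimal deep-Ritz-style sketch (assumption: ordinary operator -Laplacian + V on
# [-1, 1]^d with a zero boundary enforced by a cutoff factor); the paper's loss for
# the *fractional* operator is different and not reproduced here.
import torch
import torch.nn as nn

d = 2
net = nn.Sequential(nn.Linear(d, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))

def ansatz(x):
    # Multiply by prod(1 - x_i^2) so the trial function vanishes on the boundary.
    cutoff = torch.prod(1.0 - x ** 2, dim=1, keepdim=True)
    return cutoff * net(x)

def potential(x):
    return (x ** 2).sum(dim=1, keepdim=True)   # harmonic-type potential, illustrative

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(5000):
    x = (2 * torch.rand(2048, d) - 1).requires_grad_(True)   # uniform samples in the box
    u = ansatz(x)
    grad_u = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    # Monte Carlo Rayleigh quotient: (integral of |grad u|^2 + V u^2) / integral of u^2
    num = (grad_u ** 2).sum(dim=1, keepdim=True) + potential(x) * u ** 2
    rayleigh = num.mean() / (u ** 2).mean()
    opt.zero_grad(); rayleigh.backward(); opt.step()

print("estimated smallest eigenvalue:", rayleigh.item())
```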
  • CHEN Fukai, LIU Ziyang, LIN Guochang, CHEN Junqing, SHI Zuoqiang
    Journal of Systems Science & Complexity. 2024, 37(2): 413-440. https://doi.org/10.1007/s11424-024-3294-x
    In this paper, the authors propose the Neumann series neural operator (NSNO) to learn the solution operator of the Helmholtz equation, mapping inhomogeneity coefficients and source terms to solutions. The Helmholtz equation is a crucial partial differential equation (PDE) with applications in various scientific and engineering fields. However, solving the Helmholtz equation efficiently remains a major challenge, especially at high wavenumbers. Recently, deep learning has shown great potential in solving PDEs, and in learning solution operators in particular. Inspired by the Neumann series for the Helmholtz equation, the authors design a novel network architecture in which a U-Net is embedded to capture multiscale features. Extensive experiments show that the proposed NSNO significantly outperforms the state-of-the-art FNO, with at least 60% lower relative L2 error, especially in the large-wavenumber case, and has 50% lower computational cost and less data requirement. Moreover, NSNO can be used as the surrogate model in inverse scattering problems. Numerical tests show that NSNO gives results comparable to a traditional finite-difference forward solver while reducing the computational cost tremendously.
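As an illustration of the truncated-Neumann-series structure described above (with a small plain CNN standing in for the embedded U-Net), the following sketch shows one way such an operator network could be wired; the class names, series depth, and update rule are assumptions, not the NSNO implementation.

```python
# Rough sketch of a Neumann-series-style operator network (assumptions: a small CNN
# stands in for the U-Net, and the series u = Gf + G(q*Gf) + G(q*G(q*Gf)) + ... is
# truncated at a fixed number of terms).
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Learned stand-in for the background Helmholtz solve (a U-Net in the paper)."""
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.GELU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.GELU(),
            nn.Conv2d(channels, 1, 3, padding=1))

    def forward(self, x):
        return self.body(x)

class NeumannSeriesOperator(nn.Module):
    def __init__(self, terms=4):
        super().__init__()
        self.G = SmallCNN()
        self.terms = terms

    def forward(self, q, f):
        # q, f: (batch, 1, H, W) inhomogeneity coefficient and source term.
        term = self.G(f)            # zeroth-order term G f
        u = term
        for _ in range(self.terms - 1):
            term = self.G(q * term) # next term applies G to the scattered field q * term
            u = u + term            # accumulate the truncated Neumann series
        return u

model = NeumannSeriesOperator()
q = torch.rand(8, 1, 64, 64); f = torch.randn(8, 1, 64, 64)
u_pred = model(q, f)                # would be trained against reference solutions
```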
  • WANG Zhen, CUI Shikun
    Journal of Systems Science & Complexity. 2024, 37(2): 463-479. https://doi.org/10.1007/s11424-024-3337-3
    The soliton resolution conjecture proposes that a solution of the initial value problem evolves into a dispersive part and a soliton part. However, the problem of determining the number of solitons that form from a given initial profile remains unsolved, except for a few specific cases. In this paper, the authors use a deep learning method to predict the number of solitons generated by a given initial value of the Korteweg-de Vries (KdV) equation. By leveraging the analytical relationship between initial values of the form A sech^2(x) and the number of solitons, the authors train a convolutional neural network (CNN) that can accurately identify the soliton count from spatio-temporal data. The trained neural network is capable of predicting the number of solitons for other given initial values without any additional assistance. Through extensive calculations, the authors demonstrate the effectiveness and high performance of the proposed method.
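A minimal sketch of the CNN-classification idea follows, assuming fixed-size spatio-temporal samples of u(x, t) and a small maximum soliton count; the architecture, input size, and labels are illustrative, not the authors' network or data pipeline.

```python
# Minimal sketch (assumptions: 128x128 spatio-temporal fields and at most 5 solitons,
# treated as a classification problem).
import torch
import torch.nn as nn

class SolitonCounter(nn.Module):
    def __init__(self, max_solitons=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.classifier = nn.Linear(64, max_solitons)   # classes = 1, ..., max_solitons

    def forward(self, u_xt):                            # (batch, 1, nt, nx)
        return self.classifier(self.features(u_xt).flatten(1))

model = SolitonCounter()
u_xt = torch.randn(4, 1, 128, 128)                      # placeholder spatio-temporal fields
logits = model(u_xt)
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 2, 3]))  # label = soliton count - 1
```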
  • SUN Jiuyun, DONG Huanhe, FANG Yong
    Journal of Systems Science & Complexity. 2024, 37(2): 480-493. https://doi.org/10.1007/s11424-024-3349-z
    In this paper, physics-informed liquid networks (PILNs) are proposed, based on liquid time-constant networks (LTCs), for solving nonlinear partial differential equations (PDEs). In this approach, the network state is controlled via ordinary differential equations (ODEs). The significant advantage is that neurons controlled by ODEs are more expressive than simple activation functions. In addition, the PILNs use difference schemes instead of automatic differentiation to construct the residuals of PDEs, which avoids information loss in the neighborhood of the sampling points. As the method draws on both the traveling wave method and physics-informed neural networks (PINNs), it has a better physical interpretation. Finally, the KdV equation and the nonlinear Schrödinger equation are solved to test the generalization ability of the PILNs. To the best of the authors' knowledge, this is the first deep learning method that uses ODEs to simulate the numerical solutions of PDEs.
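As an illustration of building a PDE residual with difference schemes rather than automatic differentiation, the sketch below evaluates a network on a uniform grid and forms the KdV residual u_t + 6 u u_x + u_xxx with central differences; a plain MLP stands in for the liquid time-constant network, and the grid, stencils, and KdV normalization are assumptions.

```python
# Sketch of a finite-difference PDE residual (assumption: the KdV form
# u_t + 6 u u_x + u_xxx = 0 on a uniform grid, central differences in the interior).
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))

def kdv_residual(net, x, t, dx, dt):
    """Evaluate u = net(x, t) on a grid and build the residual with differences."""
    X, T = torch.meshgrid(x, t, indexing="ij")
    u = net(torch.stack([X.flatten(), T.flatten()], dim=1)).reshape(X.shape)  # (nx, nt)
    u_t = (u[:, 2:] - u[:, :-2]) / (2 * dt)                     # central difference in t
    u_x = (u[2:, :] - u[:-2, :]) / (2 * dx)                     # central difference in x
    u_xxx = (u[4:, :] - 2 * u[3:-1, :] + 2 * u[1:-3, :] - u[:-4, :]) / (2 * dx ** 3)
    # Align all stencils on the common interior: x-index 2..nx-3, t-index 1..nt-2.
    return u_t[2:-2, :] + 6 * u[2:-2, 1:-1] * u_x[1:-1, 1:-1] + u_xxx[:, 1:-1]

x = torch.linspace(-10, 10, 201); t = torch.linspace(0, 1, 101)
res = kdv_residual(net, x, t, dx=0.1, dt=0.01)
loss = (res ** 2).mean()          # PDE term of the physics-informed loss
```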
  • LIU Haiyi, ZHANG Yabin, WANG Lei
    Journal of Systems Science & Complexity. 2024, 37(2): 494-510. https://doi.org/10.1007/s11424-024-3321-y
    Recently, physics-informed neural networks have shown remarkable ability in solving low-dimensional nonlinear partial differential equations. However, for some high-dimensional systems, such techniques can be time-consuming and inaccurate. In this paper, the authors put forward a pre-training physics-informed neural network with mixed sampling (pPINN) to address these issues. Based only on the initial and boundary conditions, the authors design the pre-training stage to filter out a set of misfitting points, which are then used as part of the training points in the next stage. The authors further take the parameters of the neural network from Stage 1 as the initialization for Stage 2. The advantage of the proposed approach is that it takes less time to transfer the valuable information from the first stage to the second, improving the calculation accuracy, especially for high-dimensional systems. To verify the performance of the pPINN algorithm, the authors first focus on the growing-and-decaying mode of the line rogue wave in the Davey-Stewartson I equation. Another case is the accelerated motion of a lump in the inhomogeneous Kadomtsev-Petviashvili equation, which admits a more complex evolution than the uniform equation. The exact solution provides a perfect sample for the data experiments and can also serve as a reference for assessing the performance of the algorithm. The experiments confirm that the pPINN algorithm improves prediction accuracy and training efficiency, and reduces the training time to a large extent when simulating nonlinear waves of high-dimensional equations.
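A two-stage skeleton in the spirit described above is sketched below, assuming a simple heat-equation residual, a residual-magnitude rule for picking the extra training points, and reuse of the stage-1 weights; the actual pPINN sampling rule, equations, and loss weights are not reproduced.

```python
# Two-stage training skeleton (assumptions: heat-equation residual u_t = u_xx,
# residual-based point selection, placeholder initial/boundary data).
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def pde_residual(xt):
    """Illustrative residual of u_t = u_xx, computed with automatic differentiation."""
    xt = xt.detach().clone().requires_grad_(True)      # columns are (x, t)
    u = net(xt)
    grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = grads[:, :1], grads[:, 1:]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, :1]
    return u_t - u_xx

# Placeholder initial/boundary data (x, t) -> u; real data comes from the problem setup.
xt_data = torch.rand(512, 2); u_data = torch.zeros(512, 1)

# Stage 1: fit only the initial/boundary conditions.
for _ in range(2000):
    loss = ((net(xt_data) - u_data) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Select the worst-fitting collocation points from a random pool by residual size.
pool = torch.rand(10000, 2)
scores = pde_residual(pool).abs().detach().squeeze()
selected = pool[scores.topk(2000).indices]

# Stage 2: continue from the stage-1 weights with data + residual terms.
for _ in range(2000):
    loss = ((net(xt_data) - u_data) ** 2).mean() + (pde_residual(selected) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```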
  • ZHOU Huijuan
    Journal of Systems Science & Complexity. 2024, 37(2): 511-544. https://doi.org/10.1007/s11424-024-3467-7
    This paper mainly introduces the parallel physics-informed neural networks (PPINNs) method with regularization strategies to solve the data-driven forward-inverse problems of the variable-coefficient modified Korteweg-de Vries (VC-MKdV) equation. For the forward problem of the VC-MKdV equation, the author uses the traditional PINN method to obtain satisfactory data-driven soliton solutions and provides a detailed analysis of the impact of network width and depth on solution accuracy and speed. Furthermore, the author finds that the traditional PINN method outperforms the one with locally adaptive activation functions in solving the data-driven forward problems of the VC-MKdV equation. As for the data-driven inverse problem of the VC-MKdV equation, the author introduces parallel neural networks to separately train the solution function and the coefficient function, successfully addressing the function discovery problem of the VC-MKdV equation. To further enhance the networks' generalization ability and noise robustness, the author incorporates two regularization strategies into the PPINNs. Extensive numerical experiments in this paper demonstrate that the PPINNs method can effectively address the function discovery problem of the VC-MKdV equation, and that the inclusion of appropriate regularization strategies in the PPINNs improves its performance.
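As an illustration of the parallel-network idea, the sketch below trains one network for the solution and one for an unknown time-dependent coefficient, with L2 weight decay standing in for the paper's regularization strategies; a variable-coefficient advection equation is used as a stand-in for the VC-MKdV equation, so the residual, data, and network sizes are all assumptions.

```python
# Sketch of the parallel-network inverse problem (assumptions: stand-in PDE
# u_t + c(t) u_x = 0, synthetic observations, L2 weight decay as regularization).
import torch
import torch.nn as nn

u_net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
c_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))   # unknown coefficient c(t)
opt = torch.optim.Adam(list(u_net.parameters()) + list(c_net.parameters()),
                       lr=1e-3, weight_decay=1e-4)                     # L2 regularization

def residual(xt):
    xt = xt.detach().clone().requires_grad_(True)      # columns are (x, t)
    u = u_net(xt)
    grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = grads[:, :1], grads[:, 1:]
    return u_t + c_net(xt[:, 1:]) * u_x                # stand-in PDE: u_t + c(t) u_x = 0

# Synthetic observed solution u = sin(x - t), an exact solution when c(t) = 1.
xt_obs = torch.rand(1024, 2); u_obs = torch.sin(xt_obs[:, :1] - xt_obs[:, 1:])
colloc = torch.rand(4096, 2)

for _ in range(5000):
    loss = ((u_net(xt_obs) - u_obs) ** 2).mean() + (residual(colloc) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```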
  • SUN Junchao, CHEN Yong, TANG Xiaoyan
    Journal of Systems Science & Complexity. 2024, 37(2): 545-566. https://doi.org/10.1007/s11424-024-3500-x
    The multiple patterns of internal solitary wave interactions (ISWI) constitute a complex oceanic phenomenon. Satellite remote sensing techniques detect these ISWI indirectly, but do not provide information on their detailed structure and dynamics. Recently, the authors considered a three-layer fluid with shear flow and developed a (2+1)-dimensional Kadomtsev-Petviashvili (KP) model that is capable of describing five types of oceanic ISWI: O-type, P-type, TO-type, TP-type, and Y-shaped. Deep learning models, particularly physics-informed neural networks (PINN), are widely used in the field of fluids and internal solitary waves. However, the authors find that the amplitude of internal solitary waves is much smaller than the wavelength and that the ISWI occur at relatively large spatial scales; these characteristics lead to an imbalance among the terms of the loss function of the PINN model. To solve this problem, the authors introduce two weighted loss function methods, a fixed weighting and an adaptive weighting method, to improve the PINN model. This successfully simulates the detailed structure and dynamics of ISWI, with simulation results consistent with the satellite images. In particular, the adaptive weighting method automatically updates the weights of the different terms in the loss function and outperforms the fixed weighting method in terms of generalization ability.
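The abstract does not specify the adaptive weighting rule. One common device, learnable uncertainty-style log-weights optimized jointly with the network, is sketched below as a concrete possibility; the placeholder loss terms stand in for the PDE residual and the initial/boundary misfits of the KP model, and the authors' own update rule may differ.

```python
# One common adaptive-weighting device (assumption: learnable log-variance weights,
# optimized jointly with the network); placeholder loss terms for illustration only.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
log_w = nn.Parameter(torch.zeros(3))          # one log-weight per loss term (PDE, IC, BC)
opt = torch.optim.Adam(list(net.parameters()) + [log_w], lr=1e-3)

def loss_terms():
    # Placeholder terms standing in for the PDE residual, initial-condition misfit,
    # and boundary-condition misfit of a physics-informed loss.
    interior = torch.rand(1024, 3)                                    # (x, y, t) interior points
    initial = torch.cat([torch.rand(256, 2), torch.zeros(256, 1)], dim=1)
    boundary = torch.cat([torch.ones(256, 1), torch.rand(256, 2)], dim=1)
    pde = (net(interior) ** 2).mean()
    ic = ((net(initial) - 1.0) ** 2).mean()
    bc = (net(boundary) ** 2).mean()
    return torch.stack([pde, ic, bc])

for _ in range(3000):
    terms = loss_terms()
    # Each term is scaled by exp(-log_w); the +log_w penalty keeps weights from collapsing.
    loss = (torch.exp(-log_w) * terms + log_w).sum()
    opt.zero_grad(); loss.backward(); opt.step()
```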