Journal Network of the Academy of Mathematics and Systems Science, Chinese Academy of Sciences

15 August 2025, Volume 45 Issue 8
    

  • WANG Bo, YUAN Jiaxin, YE Xue, HAO Jun
    Journal of Systems Science and Mathematical Sciences. 2025, 45(8): 2363-2375. https://doi.org/10.12341/jssms240834
    Considering the high volatility and complexity of electricity spot price time series, a combined forecasting model based on the wavelet transform and the light gradient boosting machine (LGBM) is proposed. By introducing a rolling time window together with the wavelet transform, the electricity spot price series is decomposed dynamically at multiple scales, so that frequency-domain features can be extracted, modal complexity is reduced, and data leakage is effectively avoided. The model then exploits the ability of the LGBM algorithm to capture complex nonlinear features. Spot market data from the Shanxi electric power market are used to validate the model. The results show that the proposed model outperforms mainstream forecasting methods, including the long short-term memory network, support vector machine, elastic net regression, and extreme gradient boosting, on key performance indicators such as root mean square error, mean absolute error, and the coefficient of determination, with $R^2$ reaching 0.9792, indicating high forecasting accuracy. The model also remains robust and adaptive under different market conditions, suggesting that it can serve as a reliable forecasting tool for power market participants and help optimize trading strategies and reduce market risks.
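    A minimal sketch of the rolling-window wavelet-feature pipeline described above, assuming hourly spot prices in a one-column file; the file name, window length, wavelet choice, and LGBM hyperparameters are illustrative, not the authors' settings.

```python
# Minimal sketch (not the authors' code): rolling-window wavelet features + LightGBM.
# Assumes `prices` is a 1-D numpy array of hourly spot prices; window sizes are illustrative.
import numpy as np
import pywt
from lightgbm import LGBMRegressor

def wavelet_features(window, wavelet="db4", level=3):
    """Decompose one rolling window and summarize each frequency band."""
    coeffs = pywt.wavedec(window, wavelet, level=level)
    feats = []
    for c in coeffs:                      # approximation + detail coefficients
        feats += [c.mean(), c.std(), np.abs(c).max()]
    return feats

def make_dataset(prices, window=168, horizon=1):
    """Build (features, target) pairs; only past data enters each window, avoiding leakage."""
    X, y = [], []
    for t in range(window, len(prices) - horizon + 1):
        X.append(wavelet_features(prices[t - window:t]))
        y.append(prices[t + horizon - 1])
    return np.array(X), np.array(y)

prices = np.loadtxt("spot_prices.csv")    # hypothetical input file
X, y = make_dataset(prices)
split = int(0.8 * len(X))                 # chronological split, no shuffling
model = LGBMRegressor(n_estimators=500, learning_rate=0.05)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
```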
  • ZHANG Yu, LI Kaili, WANG Jinting
    Journal of Systems Science and Mathematical Sciences. 2025, 45(8): 2376-2388. https://doi.org/10.12341/jssms240640
    Privatization reform is regarded as an effective strategy to reduce waiting times in the public healthcare system. This paper focuses on two modes of privatization reform: The competition mode, in which private hospitals enter the market and compete with public hospitals, and the collaboration mode, in which public and private hospitals cooperate to achieve common goals. A queueing model is used to describe the patient consultation process; we analyze the service rates and prices of public and private hospitals under the different reforms and study their impact on the number of patients covered by medical services, patient waiting times, patient welfare, and social welfare. The study finds that the competition mode can significantly reduce patient waiting time, thereby expanding medical service coverage and enhancing patient utility and social welfare. In contrast, while the collaboration mode can also reduce patient waiting time, its effect on coverage, patient utility, and social welfare is uncertain: It expands coverage and improves patient utility and social welfare only when the service capacity of the public-private partnership hospital is relatively large or the degree of privatization is high. Finally, a private hospital's choice between the collaboration and competition modes depends mainly on the subsidy rate provided by the government to public hospitals and the level of privatization pursued by the public-private partnership hospital for its own interests: When the subsidy rate or the level of privatization is high, private hospitals tend to choose the collaboration mode; otherwise, they tend to choose the competition mode.
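    A toy M/M/1 illustration of the waiting-time mechanism behind the competition mode: splitting arrivals between a public and a private server shortens expected queueing delays. All rates are hypothetical; the paper's queueing model also endogenizes service rates and prices.

```python
# Toy M/M/1 illustration (not the paper's model): how splitting demand between a public
# and a private server reduces expected waiting time. All rates are hypothetical.
def mm1_wait(arrival_rate, service_rate):
    """Expected waiting time in queue (Wq) for a stable M/M/1 system."""
    assert arrival_rate < service_rate, "queue must be stable"
    return arrival_rate / (service_rate * (service_rate - arrival_rate))

lam, mu_public = 8.0, 10.0                # patients/hour, public hospital alone
w_public_only = mm1_wait(lam, mu_public)

# Competition mode: a private hospital absorbs part of the arrivals.
lam_private, mu_private = 3.0, 5.0
w_public = mm1_wait(lam - lam_private, mu_public)
w_private = mm1_wait(lam_private, mu_private)

print(f"public only: Wq = {w_public_only:.3f} h")
print(f"with competition: public Wq = {w_public:.3f} h, private Wq = {w_private:.3f} h")
```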
  • ZHANG Yuwei, LI Zhenping, LI Xin, ZHANG Ziting, FANG Yong
    Journal of Systems Science and Mathematical Sciences. 2025, 45(8): 2389-2411. https://doi.org/10.12341/jssms240843
    Motivated by the dynamic change of freshness levels of fresh products over time, this paper studies a joint optimization problem of fresh product allocation and cold chain distribution with multiple freshness levels. Taking the allocation of products at different freshness levels and the cold chain distribution routes as decision variables, and considering both soft and hard time windows as well as customers' freshness requirements, a joint optimization model is constructed. Subject to constraints such as vehicle capacity, hard time windows, and product quality, the objective is to minimize the sum of fixed vehicle costs, transportation costs, refrigeration costs, penalty costs for violating soft time windows, product damage costs, and customer stockout losses. A two-stage hybrid heuristic algorithm based on large neighborhood search is designed according to the characteristics of the model. Instances constructed from the Solomon dataset are solved with the Gurobi solver and with the two-stage heuristic, respectively; the results verify the correctness of the model and the efficiency of the algorithm. Comparison with a staged optimization strategy shows the superiority of jointly optimizing multi-level freshness allocation and cold chain distribution: The joint strategy significantly reduces spoilage costs and customer stockout losses and lowers the total distribution cost by about 45% on average. Finally, the effectiveness of the algorithm for practical problems is verified through a real case.
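    A small sketch of the soft/hard time-window penalty term that enters the objective above; the penalty coefficients and window bounds are placeholders rather than the paper's parameter values.

```python
# Sketch of the soft time-window penalty used in the objective (coefficients are placeholders).
def soft_window_penalty(arrival, early_soft, late_soft, early_hard, late_hard,
                        c_early=0.5, c_late=1.0):
    """Zero inside the soft window, linear penalty between soft and hard limits,
    infeasible (None) outside the hard window."""
    if arrival < early_hard or arrival > late_hard:
        return None                      # hard time window violated: route infeasible
    if arrival < early_soft:
        return c_early * (early_soft - arrival)
    if arrival > late_soft:
        return c_late * (arrival - late_soft)
    return 0.0

# Example: customer accepts [9, 11] without penalty, tolerates [8, 12] at a cost.
print(soft_window_penalty(11.5, 9, 11, 8, 12))   # 0.5 hours late -> penalty 0.5
```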
  • YAN Botao, WANG Xihui, FAN Yu, SHAO Jianfang, WANG Jun
    Journal of Systems Science and Mathematical Sciences. 2025, 45(8): 2412-2427. https://doi.org/10.12341/jssms240851
    In-kind donations are an important source of relief supplies in various emergency events. Without suitable coordination and management, however, in-kind donations pile up and block the delivery of other relief supplies, a phenomenon known as “material convergence” that hampers relief operations. An effective remedy is to screen and filter the in-kind donations, but the literature offers little guidance on how the screening and filtering should be carried out and how much labor it requires. In this paper, we incorporate the capacity building of relief organizations into the screening and filtering of in-kind donations and model the process with queueing theory. The optimal strategy is then determined by optimizing the effectiveness of the relief operation, measured through deprivation costs. Three situations are considered: no screening and filtering, sufficient capacity, and insufficient capacity. By constructing mathematical programming models and conducting numerical experiments, we find that screening and filtering increase the effectiveness of relief operations in most situations; when the budget is sufficient, relief organizations should first guarantee their screening and filtering capability, while still balancing the overall budget allocation.
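    An illustrative sketch of the trade-off evaluated above, using the exponential deprivation-cost specification common in this literature; the delay figures are hypothetical and the queueing and budget-allocation details of the paper are not reproduced.

```python
# Illustrative sketch (not the paper's model): convex deprivation cost of delay, and the
# trade-off between time spent on screening and the delivery slowdown caused by unscreened
# low-priority donations. Parameters are hypothetical.
import math

def deprivation_cost(delay_hours, a=0.1, b=0.05):
    """A common convex specification: cost grows exponentially with deprivation time."""
    return math.exp(a + b * delay_hours) - 1.0

base_delay = 24.0          # hours needed to deliver relief items
screening_time = 6.0       # extra delay introduced by screening/filtering
congestion_delay = 20.0    # extra delay if unscreened donations clog the pipeline

cost_with_screening = deprivation_cost(base_delay + screening_time)
cost_without = deprivation_cost(base_delay + congestion_delay)
print(cost_with_screening < cost_without)   # screening pays off when congestion is worse
```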
  • SONG Pingfan, CHEN Xu, DAI Qianzhi, LI Lin, LEI Xiyang
    Journal of Systems Science and Mathematical Sciences. 2025, 45(8): 2428-2446. https://doi.org/10.12341/jssms240227
    Employing spatial econometric and two-stage shared-input DEA models, we measure the collaborative innovation efficiency of 30 selected provincial regions in China from 2012 to 2020 and analyze its evolution and dynamic trends. The findings are as follows: 1) Collaborative innovation development in China currently exhibits a pattern of “high in the east and low in the west”, with economically developed regions generally showing higher collaborative innovation efficiency than less developed ones. 2) Collaborative technology R&D efficiency shows an inverted V-shaped trend, while collaborative achievement transformation efficiency follows the opposite trend; over the sample years, China's overall collaborative achievement transformation efficiency has exceeded its collaborative technology R&D efficiency. 3) The gap in overall collaborative innovation efficiency between developed and less developed regions is widening, showing a certain degree of polarization; the eastern regions focus more on achievement transformation capability and lead other regions in overall collaborative innovation efficiency. 4) Differences among the eastern, central, western, and northeastern regions are the main source of the disparities in provincial collaborative innovation efficiency, and the contribution of inter-regional differences to the overall variation has been increasing. 5) For typical collaborative innovation communities, both technology R&D efficiency and achievement transformation efficiency are higher once regional collaboration is taken into account than when it is ignored; Beijing-Tianjin-Hebei benefits from collaboration in both technology R&D and achievement transformation, whereas the Yangtze River Delta benefits mainly in achievement transformation.
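    The two-stage shared-input model is more elaborate than can be shown here; the sketch below shows its basic building block, an input-oriented CCR DEA efficiency score computed as a linear program on toy data.

```python
# Building-block sketch: standard input-oriented CCR DEA efficiency for one DMU,
# solved as a linear program with scipy. The paper's two-stage shared-input model
# adds structure on top of this; the data below are toy numbers.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """X: inputs (m x n), Y: outputs (s x n), k: index of the DMU being evaluated.
    Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1); c[0] = 1.0
    # sum_j lambda_j * x_ij - theta * x_ik <= 0   (inputs)
    A_in = np.hstack([-X[:, [k]], X])
    # -sum_j lambda_j * y_rj <= -y_rk             (outputs)
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[:, k]])
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun

X = np.array([[2., 3., 5., 4.], [4., 2., 6., 3.]])   # two inputs, four DMUs
Y = np.array([[1., 1., 2., 1.5]])                    # one output
print([round(ccr_efficiency(X, Y, k), 3) for k in range(4)])
```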
  • WU Hongxu, FANG Yong, DENG Zhibin
    Journal of Systems Science and Mathematical Sciences. 2025, 45(8): 2447-2465. https://doi.org/10.12341/jssms240199
    With the rapid development of deep learning, its application to asset pricing has attracted widespread attention. This paper examines the theoretical foundation of factor pricing models and proposes a latent factor model built on deep neural networks with characteristic-ranking inputs. The model overcomes the limitations of traditional factor models in handling nonlinearity and hypothesis testing, and uses appropriate activation functions to mimic the actual process of portfolio construction. In an empirical analysis of the China A-share market, the proposed deep neural network model significantly outperforms the benchmark models in out-of-sample prediction and achieves the highest cumulative returns and the best Sharpe ratio when constructing mean-variance efficient frontier portfolios. Furthermore, an analysis of the gradient-based importance of the characteristics in the model outputs shows that monthly A-share returns are significantly influenced by transaction-related factors, reflecting the distinctive character of the Chinese stock market as an emerging market. The paper provides new insights into the construction of latent factor models and into the market behavior of the A-share market.
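    One plausible reading of the characteristic-ranking input and the portfolio-style activation, sketched below: characteristics are cross-sectionally ranked into [-1, 1] and mapped to long-short weights. This is not the authors' network architecture; the weighting rule and variable names are assumptions.

```python
# Sketch of the characteristic-ranking idea (one plausible reading of the abstract,
# not the authors' architecture): each firm characteristic is cross-sectionally ranked
# into [-1, 1], and an activation maps scores into weights of a long-short portfolio.
import numpy as np

def rank_transform(char):
    """Map a cross-section of a characteristic to [-1, 1] by rank (robust to outliers)."""
    order = char.argsort().argsort()                # ranks 0..n-1
    return 2.0 * order / (len(char) - 1) - 1.0

def longshort_weights(scores):
    """Activation mimicking portfolio construction: demean, then normalize so that
    longs sum to +1 and shorts to -1."""
    s = scores - scores.mean()
    pos, neg = s[s > 0].sum(), -s[s < 0].sum()
    return np.where(s > 0, s / pos, s / neg if neg > 0 else 0.0)

rng = np.random.default_rng(0)
size_char = rng.lognormal(size=100)                 # hypothetical firm characteristic
scores = rank_transform(size_char)                  # network input in the ranked space
weights = longshort_weights(scores)
print(weights[weights > 0].sum(), weights[weights < 0].sum())   # ~ +1 and -1
```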
  • LEI Xiyang, QIU Weiyan, CHENG Yuanyuan
    Journal of Systems Science and Mathematical Sciences. 2025, 45(8): 2466-2483. https://doi.org/10.12341/jssms240119
    The new round of power system reform has introduced a transmission and distribution pricing mechanism based on the principle of “permitted costs + reasonable returns”. This cost-based regulatory mechanism has become ill-suited to the requirements of high-quality development of the power industry in the new era, so it is urgent to build effective incentives, and the heterogeneous operating environments faced by grid companies in different regions, into the design of transmission and distribution pricing. To this end, we propose an incentive-based transmission and distribution cost regulation method that combines a meta-frontier super-efficiency DEA model with yardstick competition regulation. First, we divide 25 provincial grid enterprises into six groups according to the similarity of their external environments and calculate their operating efficiencies under both the common frontier and the group frontiers. We then develop a cost compensation scheme for each grid enterprise based on its efficiency performance across three scenarios. The scheme rewards grid enterprises that are efficient under both the group frontier and the common frontier, while safeguarding the production incentives of enterprises whose mediocre performance is due to external environmental factors. The study finds that 1) the permitted costs obtained by provincial grid enterprises under our methodology improve, and the sum of permitted cost adjustments decreases significantly; and 2) a few grid enterprises receive positive incentives while the majority receive negative incentives. Finally, we present policy recommendations for the reform of transmission and distribution tariffs.
  • WANG Cong, LUO Gongzhi
    Journal of Systems Science and Mathematical Sciences. 2025, 45(8): 2484-2499. https://doi.org/10.12341/jssms240125
    To effectively address fuzzy information, preference information, and noise in sequential decision information systems, a new single-valued intelligent probabilistic rough set model is constructed by integrating singleton fuzzy sets with probabilistic rough set models on the basis of an improved scoring function. First, considering the fuzziness of singleton fuzzy numbers and subjective preferences, an improved scoring function is defined to establish dominance relations among objects. Second, to enhance the fault tolerance of the model, a conditional probability threshold is introduced to obtain the single-valued intelligent probabilistic rough set model. A reduction method and a rule extraction method based on the discernibility matrix are then designed according to the properties of the probabilistic lower and upper approximations. The effectiveness and applicability of the proposed method are validated through application examples.
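    A toy sketch of the probabilistic lower and upper approximations with thresholds (alpha, beta); the dominance classes induced by the improved scoring function are abstracted into precomputed granules, and the data are hypothetical.

```python
# Toy sketch of probabilistic rough approximations (not the paper's full model):
# given the granule [x] induced by a dominance relation and thresholds alpha > beta,
# x enters the lower approximation if P(X | [x]) >= alpha and the upper one if P(X | [x]) > beta.
def approximations(granules, target, alpha=0.7, beta=0.3):
    """granules: dict object -> set of objects related to it (its dominance class);
    target: set of objects in the decision class X."""
    lower, upper = set(), set()
    for x, cls in granules.items():
        p = len(cls & target) / len(cls)           # conditional probability P(X | [x])
        if p >= alpha:
            lower.add(x)
        if p > beta:
            upper.add(x)
    return lower, upper

granules = {1: {1, 2}, 2: {1, 2, 3}, 3: {3, 4}, 4: {4}}    # hypothetical dominance classes
target = {1, 2}                                            # decision class X
print(approximations(granules, target))                    # ({1}, {1, 2})
```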
  • YANG Li, SHEN Shujian, CHEN Jing
    Journal of Systems Science and Mathematical Sciences. 2025, 45(8): 2500-2516. https://doi.org/10.12341/jssms240138
    Against the background of high-quality development and the “double carbon” goal, the social responsibility of the logistics service supply chain, especially for green logistics services, needs to be developed and strengthened. From the perspective of different power structures, this paper uses game theory to study a three-level supply chain composed of the government, a logistics service supplier, and a logistics integrator, and explores how the decisions of the various stakeholders affect the green level, the related profits, and total social responsibility under three arrangements: no contract and no subsidy, subsidy without a contract, and contract with subsidy. The results show that the higher the social responsibility level of the logistics integrator, the higher the green level, and that the green level rises successively from no contract without subsidy, to subsidy without contract, to contract with subsidy. When there is neither contract nor subsidy, the social responsibility level of the logistics integrator must lie within a certain threshold range. Under different combinations of power structure, contract, and subsidy, the green level is higher when the supplier-dominated chain adopts a cost-sharing contract with subsidy, and when the integrator-dominated chain adopts a revenue-sharing contract with subsidy.
  • FU Yingxiong, XIE Huajun, ZHAN Jingjing, ZHANG Xin
    Journal of Systems Science and Mathematical Sciences. 2025, 45(8): 2517-2534. https://doi.org/10.12341/jssms240135
    In view of the increasingly severe air pollution problem in China, effective assessment of atmospheric environmental efficiency is especially critical. This paper proposes a dynamic network slacks-based measure (NSBM) model that accounts for both the internal structure of the system and the carry-over activities between adjacent periods, and uses it to dynamically evaluate inter-provincial atmospheric environmental efficiency in China. The model overcomes the shortcoming of traditional network data envelopment analysis (DEA), which ignores the lagged effects of carry-over variables. It is applied to analyze the overall atmospheric environmental efficiency and its sub-stage efficiencies for 30 selected Chinese provinces (municipalities and autonomous regions) from 2016 to 2021. The empirical results show that: 1) In general, the overall efficiency of the atmospheric environment and the efficiency scores of each sub-stage in eastern China are higher than those in central and western China; 2) Low pollutant generation efficiency combined with high pollution control efficiency, or high pollutant generation efficiency combined with low pollution control efficiency, are the main reasons for the overall inefficiency of the atmospheric environment in some provinces; 3) In the Beijing-Tianjin-Hebei region, Beijing and Tianjin have higher overall and sub-stage atmospheric environmental efficiency than Hebei Province.
  • WANG Cuixia, LIU Yilin
    Journal of Systems Science and Mathematical Sciences. 2025, 45(8): 2535-2553. https://doi.org/10.12341/jssms240198
    To systematically analyze the dynamic and far-reaching impact of the ripple effect of soybean import interruption risk on supply chain performance and to form effective response strategies, we establish a system dynamics model of a three-level soybean supply chain composed of imported and domestic supply, domestic market inventory, and demand, and simulate the cascading effects of different degrees of import interruption on soybean market inventory, price, and demand. We further conduct parameter-regulation simulations to analyze how two policies, increasing soybean production capacity and replacing or reducing the proportion of soybean meal in feed, alleviate the ripple effect. The simulation results indicate that: 1) An import interruption has a large negative impact on domestic soybean market inventory and price; if the interruption lasts long, the market may remain in short supply for an extended period, prices skyrocket and are slow to return to normal, and the ripple effect is both harmful and long-lasting; 2) The impact of an import interruption on the domestic market price lags behind its impact on market inventory, so a short interruption has little effect on the price; 3) Because the ability to raise soybean yield per unit area and to expand the planting area is limited, the capacity-improvement strategy does little to reduce the negative impact of the ripple effect; 4) The soybean meal reduction and substitution policy noticeably curbs the decline of market inventory and the rise of market price after an interruption: As the proportion of soybean meal in feed decreases, the negative cascading effects on inventory and price are significantly reduced, the minimum supply-demand ratio rises, and the period of short supply is significantly shortened.
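    A minimal stock-and-flow sketch of the import-interruption ripple effect, with price adjusting with a lag toward a level driven by inventory coverage; all parameters and functional forms are illustrative, not the paper's calibrated system dynamics model.

```python
# Minimal stock-and-flow sketch of the import-interruption ripple effect
# (illustrative parameters, not the paper's calibrated model).
import numpy as np

T = 104                                      # weeks simulated
inventory = np.zeros(T); price = np.zeros(T)
inventory[0], price[0] = 500.0, 1.0          # initial stock and normalized price

domestic_supply = 20.0           # weekly domestic production
import_supply = 60.0             # weekly imports under normal conditions
base_demand = 80.0               # weekly demand at the normal price
interruption = range(10, 40)     # weeks during which imports are cut by 80%

for t in range(1, T):
    imports = import_supply * (0.2 if t in interruption else 1.0)
    demand = base_demand / price[t - 1] ** 0.3        # mildly price-elastic demand
    inventory[t] = max(inventory[t - 1] + domestic_supply + imports - demand, 0.0)
    # price adjusts toward a level driven by inventory coverage (weeks of demand on hand)
    coverage = inventory[t] / demand
    target_price = np.clip(6.0 / max(coverage, 0.5), 0.8, 5.0)
    price[t] = price[t - 1] + 0.2 * (target_price - price[t - 1])   # adjustment lag

print(f"peak price {price.max():.2f}, minimum coverage {min(inventory[1:] / base_demand):.2f} weeks")
```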
  • YU Tianhui, LONG Xianjun
    Journal of Systems Science and Mathematical Sciences. 2025, 45(8): 2554-2566. https://doi.org/10.12341/jssms250160
    Stochastic variance reduction algorithms are effective methods for large-scale machine learning and have attracted wide attention in recent years. However, how to choose an appropriate step size for such algorithms remains an open question. In this paper, an adaptive accelerated stochastic variance reduction algorithm based on the Barzilai-Borwein (BB) step size is proposed for stochastic convex optimization problems. Under the strong convexity assumption, the algorithm is proved to have a linear convergence rate. Finally, numerical experiments demonstrate the effectiveness and superiority of the new algorithm.
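    A compact sketch of SVRG with a Barzilai-Borwein step size on a ridge-regularized least-squares problem; it illustrates the class of algorithm referred to above, not necessarily the paper's exact accelerated variant.

```python
# Compact sketch of SVRG with a Barzilai-Borwein (BB) step size on a ridge-regularized
# least-squares problem (illustrative; not necessarily the paper's accelerated variant).
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 20
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)
lam = 0.1

def grad_i(w, i):                       # gradient of one component f_i
    return A[i] * (A[i] @ w - b[i]) + lam * w

def full_grad(w):
    return A.T @ (A @ w - b) / n + lam * w

w = np.zeros(d)
w_prev, g_prev = None, None
m, eta = 2 * n, 0.01                    # inner loop length; initial step size
for epoch in range(30):
    g = full_grad(w)                    # full gradient at the snapshot
    if w_prev is not None:              # BB step size from successive snapshots
        s, y = w - w_prev, g - g_prev
        eta = (s @ s) / (m * (s @ y))
    w_prev, g_prev = w.copy(), g
    x = w.copy()
    for _ in range(m):                  # inner loop with variance-reduced gradients
        i = rng.integers(n)
        x -= eta * (grad_i(x, i) - grad_i(w, i) + g)
    w = x
print(np.linalg.norm(full_grad(w)))     # gradient norm should be near zero
```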
  • LAI Kai, LI Huan
    Journal of Systems Science and Mathematical Sciences. 2025, 45(8): 2567-2580. https://doi.org/10.12341/jssms250220
    With the accelerating digital transformation of the catering industry, users' online reviews have grown explosively, and effective analysis of catering reviews is valuable both for optimizing consumer decisions and for improving merchants' services. This study proposes a comprehensive evaluation method for catering stores based on user reviews, aiming to quantify the fuzziness of user preferences and improve evaluation accuracy. First, based on user review data from Meituan and Dianping, a hierarchical evaluation index system covering environment, service, taste, price, and hygiene is constructed through TF-IDF high-frequency word extraction and LDA topic mining. Second, users' multi-dimensional fuzzy evaluations are transformed into probabilistic linguistic terms, and a linguistic term set representation model incorporating probability distributions is constructed to quantify the uncertainty of the reviews. Finally, index weights are calculated by the entropy weight method, and a comprehensive score for each store is generated by a weighted linear combination. In an empirical analysis of 8 catering stores in Jinshui District, Zhengzhou City, Henan Province, the score ranking produced by the model is highly consistent with actual user experience, which verifies its practical value in reducing consumers' decision-making costs and guiding merchants to optimize their operating strategies.
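    A sketch of the entropy-weight and weighted-combination step; the upstream probabilistic-linguistic aggregation is abstracted into a hypothetical numeric score matrix in [0, 1].

```python
# Sketch of the entropy-weight step (hypothetical score matrix; the upstream
# probabilistic-linguistic aggregation is abstracted into numeric scores in [0, 1]).
import numpy as np

# rows: 4 restaurants, columns: environment, service, taste, price, hygiene
scores = np.array([[0.8, 0.7, 0.9, 0.6, 0.8],
                   [0.6, 0.8, 0.7, 0.7, 0.9],
                   [0.9, 0.6, 0.8, 0.8, 0.7],
                   [0.5, 0.9, 0.6, 0.9, 0.6]])

p = scores / scores.sum(axis=0)                      # normalize each criterion column
k = 1.0 / np.log(scores.shape[0])
entropy = -k * (p * np.log(p)).sum(axis=0)           # entropy of each criterion
weights = (1 - entropy) / (1 - entropy).sum()        # more dispersed criteria weigh more

composite = scores @ weights                          # weighted linear combination
print(np.round(weights, 3), np.round(composite, 3))
```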
  • GU Nannan, XING Mengjie, LIN Peng, CHEN Haibao
    Journal of Systems Science and Mathematical Sciences. 2025, 45(8): 2581-2598. https://doi.org/10.12341/jssms240506
    Semi-supervised graph-based dimensionality reduction methods use a data structure graph to deal with semi-supervised dimensionality reduction problems. However, most such algorithms consider only the data information while ignoring class label information, and they do not account for differences among samples during training, which reduces their robustness to noise or outliers. In this paper, by combining sparse representation with self-paced learning, a self-paced learner is proposed to obtain a linear dimensionality-reduction mapping based on a sparse discriminant graph. Specifically, the proposed method first constructs a sparse discriminant graph by integrating the propagation of class labels with the sparse representation of the data. Then, by considering the distance between each low-dimensional data point and its class anchor, together with the ability of the low-dimensional data to preserve the discriminative sparse structure of the original high-dimensional data, a self-paced learning problem for dimensionality reduction is formulated. On the one hand, the sparse discriminant graph extracts the discriminative information of the data more effectively; on the other hand, the self-paced learning mechanism automatically computes the importance of each training sample, suppresses the negative impact of unreliable data or labels, and improves robustness to noise and outliers. Results on five experimental data sets demonstrate the effectiveness of the proposed algorithm.
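    A toy sketch of the hard self-paced weighting mechanism: samples whose current loss is below an age parameter receive weight 1 and the rest 0, with the age parameter growing over iterations. The paper's full objective with the sparse discriminant graph is not reproduced.

```python
# Toy sketch of the self-paced weighting mechanism (the paper's full dimensionality-
# reduction objective with the sparse discriminant graph is not reproduced here).
import numpy as np

def self_paced_weights(losses, age):
    """Hard self-paced regularizer: include a sample only if its loss is below the age
    parameter; easy samples are learned first, noisy or hard ones are suppressed."""
    return (losses < age).astype(float)

rng = np.random.default_rng(1)
losses = rng.exponential(scale=1.0, size=10)          # per-sample losses of the current model
for age in (0.5, 1.0, 2.0):                           # age parameter grows over iterations
    v = self_paced_weights(losses, age)
    print(f"age={age}: {int(v.sum())} of {len(v)} samples included")
```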
  • ZOU Feng, CUI Hengjian, LIANG Wanfeng
    Journal of Systems Science and Mathematical Sciences. 2025, 45(8): 2599-2615. https://doi.org/10.12341/jssms240530
    This paper proposes a location-scale invariant stable correlation (ISC for short) to measure the dependence between two random vectors $v \in \mathbb{R}^r$ and $y \in \mathbb{R}^q$, where $r \geq 1$ and $q \geq 1$. The ISC satisfies $0 \leq \mathrm{ISC}(v, y) \leq 1$, and $\mathrm{ISC}(v, y) = 0$ if and only if $v$ and $y$ are independent. Based on the ISC, we further develop a new feature screening procedure called ISC-SIS. ISC-SIS does not require commonly used model assumptions or finite moment assumptions, and it can be applied directly to grouped covariates and multiple responses. In theory, we establish the sure screening property and the ranking consistency property of ISC-SIS. Numerical simulation studies and a real data analysis both indicate that ISC-SIS is highly competitive compared with existing screening procedures.
  • TAO Tielai, YU Kaizhi
    Journal of Systems Science and Mathematical Sciences. 2025, 45(8): 2616-2633. https://doi.org/10.12341/jssms240097
    This study constructs a $p$th-order integer-valued autoregressive time series model based on a Poisson thinning operator, in which the parameters are time-varying and may follow a given random distribution. On this basis, we derive its ergodicity, point estimation, interval estimation, and the statistical properties of the associated hypothesis tests. In addition, we propose a variable selection method tailored to this model and establish its theoretical properties. These properties are verified through numerical simulations, and the practical applicability and robustness of the model are demonstrated on a real-world data set.
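    A simulation sketch of the first-order building block with the Poisson thinning operator; the paper's $p$th-order, time-varying-coefficient version is reduced to a constant $\alpha$ here for illustration.

```python
# Simulation sketch of a first-order integer-valued AR model built on the Poisson
# thinning operator (the paper's p-th order, time-varying version is reduced to a
# constant alpha here for illustration).
import numpy as np

rng = np.random.default_rng(42)

def poisson_thin(x, alpha):
    """alpha o X = sum of X i.i.d. Poisson(alpha) counts (0 if X = 0)."""
    return rng.poisson(alpha, size=x).sum() if x > 0 else 0

def simulate_inar1(T, alpha=0.5, lam=2.0, x0=4):
    x = np.empty(T, dtype=int)
    x[0] = x0
    for t in range(1, T):
        x[t] = poisson_thin(x[t - 1], alpha) + rng.poisson(lam)   # survival + innovation
    return x

series = simulate_inar1(500)
print(series.mean(), 2.0 / (1 - 0.5))   # sample mean vs stationary mean lam / (1 - alpha)
```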
  • HUANG Tingting, ZHANG Baoxue
    Journal of Systems Science and Mathematical Sciences. 2025, 45(8): 2634-2651. https://doi.org/10.12341/jssms240624
    Existing clustering methods for compositional data often fail to handle zero components. To address this limitation, we propose a clustering method specifically designed for compositional data with zero components. Considering the dependence of the EM algorithm on the number of clusters and on the initial values, a robust clustering algorithm with fixed initial values is developed by adding an information-entropy penalty term to the objective function. Numerical simulation experiments demonstrate that the algorithm can accurately and adaptively determine the number of clusters. Furthermore, compared with clustering methods based on the $\alpha$-transformation, the new method performs better in recognizing the distribution patterns of compositional data. An analysis of household consumption structure survey data highlights the utility of the proposed method.
  • YANG Lin, YU Fengmin, FANG Sha
    Journal of Systems Science and Mathematical Sciences. 2025, 45(8): 2652-2661. https://doi.org/10.12341/jssms240883
    We discuss the maximum information subspace of square-integrable functional data in $L^{2}$, the low-dimensional projection subspace that retains the most information of the original functional data among all subspaces of the same dimension, so as to reduce dimensionality while preserving important information. The existence of this subspace is proved by convex optimization, and it is further shown that the subspace spanned by the eigenfunctions corresponding to the $m$ largest eigenvalues of the sample covariance operator of the functional data is the maximum information subspace. From the perspective of information reconstruction, this subspace is also proved to be the most powerful space for reconstructing the original functional data. Finally, the two-dimensional maximum information subspace of functional data from 35 Canadian weather stations is studied; the cluster analysis results in this space are consistent with those based on the discrete data. This shows that projecting functional data onto the maximum information subspace not only presents the overall characteristics of each category dynamically from a functional perspective, but also retains the maximum information of the original data and thus ensures the reliability of the clustering results.
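    A discretized sketch of the construction: the leading eigenvectors of the sample covariance of curves evaluated on a grid play the role of the eigenfunctions spanning the maximum information subspace. The curves are simulated, not the Canadian weather data.

```python
# Discretized sketch of the maximum information subspace: the leading eigenvectors of
# the sample covariance play the role of the eigenfunctions, and projecting onto them
# retains the largest share of variation (simulated curves, not the weather-station data).
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(0, 1, 100)
n = 35
# simulated curves: two smooth components plus noise
curves = (rng.standard_normal((n, 1)) * np.sin(2 * np.pi * grid)
          + rng.standard_normal((n, 1)) * np.cos(2 * np.pi * grid)
          + 0.1 * rng.standard_normal((n, len(grid))))

centered = curves - curves.mean(axis=0)
cov = centered.T @ centered / n                       # discretized sample covariance operator
eigval, eigvec = np.linalg.eigh(cov)
idx = np.argsort(eigval)[::-1]
m = 2
basis = eigvec[:, idx[:m]]                            # eigenfunctions of the m largest eigenvalues
scores = centered @ basis                             # coordinates in the 2-D subspace
explained = eigval[idx[:m]].sum() / eigval.sum()
print(f"information retained by the {m}-D subspace: {explained:.1%}")
```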
  • GUO Hongjian, LU Min, LIN Jinguan, DU Yukun
    Journal of Systems Science and Mathematical Sciences. 2025, 45(8): 2662-2679. https://doi.org/10.12341/jssms240667
    Considering the limitations of traditional noise-handling methods, this paper proposes a novel approach for noisy datasets based on feature-subspace interpolation, termed RELIS (robust equidistant linear interpolation synthesis). First, the original feature space is divided into multiple feature subspaces with approximately equal sample sizes by unsupervised clustering. Second, based on the clustering results, the idea of the traveling salesman problem (TSP) is used to order the feature subspaces. Next, combined with a soft parameter-sharing mechanism, linear fitting is applied to samples in adjacent subspaces. Finally, an innovative multi-stage minimum-weight matching method is proposed to obtain an optimal interpolation-matching strategy. The paper theoretically demonstrates the improvement achieved by RELIS for noise of different distributions and further validates it through simulation experiments.
  • HU Shiqiang, CHEN Zhijun
    Journal of Systems Science and Mathematical Sciences. 2025, 45(8): 2680-2700. https://doi.org/10.12341/jssms240118
    In recent years, loss-control strategies that adjust the payment structure according to the realized mortality rate, so that longevity risk is shared rationally between annuity holders and the issuer, have become an increasingly popular topic in longevity risk research. Based on a Bayesian Markov chain Monte Carlo algorithm, this paper analyzes, within a unified computational framework, three paths for sharing the longevity risk of annuities: the emergency-fund fixed annuity (EFA), the mortality-linked annuity (MA) whose payments are fully linked to the realized mortality rate, and the minimum guaranteed mortality annuity (MGMA) with a minimum guaranteed benefit, and evaluates the risk-sharing effect of each path. The paper finds that: 1) Compared with the traditional fixed annuity, the MA completely transfers the longevity risk borne by the annuity issuer, its actuarial benefit spread $\Delta_{MA}^{\alpha}$ has a clear advantage, and its attractiveness increases with the risk aversion $\alpha$ of annuity holders; 2) The MGMA further corrects the downside risk exposure of MA benefits, has a benefit advantage over the MA with the minimum guarantee set at 0.90 times the benefit level, and improves the issuer's reserve deficit and solvency capital relative to traditional annuities. Robustness tests with respect to risk attitude, interest rate, portfolio size, and other parameters show that the MGMA reallocates systematic longevity risk between annuity holders and issuers and effectively mitigates the longevity risk faced by the commercial annuity business.
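    A toy sketch of the payment rules as read from the abstract: the MA scales the fixed benefit by the ratio of expected to realized survivors, and the MGMA adds a floor at 0.90 of the fixed benefit. Survivor counts are made up and the Bayesian MCMC mortality model is not reproduced.

```python
# Toy sketch of the payment rules (survivor figures are made up; the paper's Bayesian
# MCMC mortality model is not reproduced): MA scales the fixed benefit by the ratio of
# expected to realized survivors, MGMA adds a floor at 0.90 of the fixed benefit.
def ma_payment(base, expected_survivors, actual_survivors):
    return base * expected_survivors / actual_survivors

def mgma_payment(base, expected_survivors, actual_survivors, floor=0.90):
    return max(ma_payment(base, expected_survivors, actual_survivors), floor * base)

base = 10_000.0
expected, actual = 900, 1050           # longevity improved: more annuitants alive than priced for
print(ma_payment(base, expected, actual))     # benefit cut passes the risk to holders (~8571)
print(mgma_payment(base, expected, actual))   # floor limits holders' downside exposure (9000)
```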