Taking the Huancheng River in Hefei City as the study site, four machine learning models (linear regression, random forest, support vector regression, and lasso regression) were used to relate Landsat 8 remote sensing reflectance to water quality parameters, and the performance of the four models was compared. Results showed that the random forest model performed best: the accuracy of its inversion models for total nitrogen (TN), total phosphorus (TP), and ammonia nitrogen (NH3-N) was above 0.7. The concentration distribution maps of the water quality parameters showed that TN and TP pollution was most significant in the northeast section of the Huancheng River, while NH3-N was most concentrated in the southwest section. The eutrophication distribution map showed that the water body in the eastern section of the Huancheng River was in a mesotrophic (moderate nutrition) state.
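For illustration, the model-comparison step described above can be sketched in a few lines with scikit-learn; here X and y are placeholders for matched Landsat 8 band reflectances and measured water quality concentrations (the names and random data are hypothetical, not the paper's actual pipeline):

# Compare four regressors for water quality inversion (sketch).
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((60, 7))   # placeholder: per-sample band reflectances
y = rng.random(60)        # placeholder: measured TN concentrations

models = {
    "linear": LinearRegression(),
    "lasso": Lasso(alpha=0.01),
    "svr": SVR(kernel="rbf"),
    "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {r2:.3f}")

With real paired samples, the model with the highest cross-validated score (the random forest in the paper) would then be applied pixel-wise to produce the concentration distribution maps.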
Oxygen-releasing materials are widely used in the treatment and restoration of urban waters as an important means of enhancing dissolved oxygen. Developing materials with slow-release properties can improve the durability and stability of oxygen release in practical engineering. This paper reviews the preparation methods and oxygen release performance of the slow-release oxygen materials reported in recent years. Moreover, the effects and mechanisms of slow-release oxygen materials on the occurrence, migration, and transformation of pollutants such as nutrients in the sediments and overlying water of rivers and lakes are reviewed. Finally, prospects and suggestions for applying slow-release oxygen materials in the remediation of rivers and lakes are proposed.
To assess the ecosystem health of shallow lakes along the shores of Taihu Lake during ecological restoration, the water body of the Jinshugang polder was investigated throughout 2022. A comprehensive assessment index system for water ecological health was constructed, consisting of a target layer, a criterion layer, and an index layer; the criterion layer comprised three items (function, integrity, and stability), and the index layer comprised 14 major items, such as comprehensive water quality and nutritional status, and 28 minor items, such as pH, temperature, and dissolved oxygen. The results showed that, over the course of ecological restoration, the functional evaluation index peaked in autumn; the integrity evaluation index in autumn was better than that in spring and summer; the stability evaluation index was best in summer, with 70% of the sampling points at the “healthy” level; and the comprehensive evaluation index of aquatic ecological health increased continuously. The comprehensive water ecological health evaluation system and results formulated for shallow lakes around Taihu Lake are not only important for subsequent restoration and management, but also provide a reference for the ecological restoration and evaluation of other lakes.
Industrial park wastewater is characterized by diverse components, variable water quality, complex pollutant factors, poor biodegradability, and stringent emission standards. A full-scale industrial park wastewater treatment plant in Deqing was used as an example to investigate the technical and economic feasibility of a process combining hydrolysis acidification, anaerobic-anoxic-oxic-anoxic-oxic (A2/O+AO) treatment, and Fenton oxidation in treating wastewater from various enterprises, primarily printing and dyeing, food manufacturing, and metal processing factories. The effluent chemical oxygen demand, ammonia nitrogen, total nitrogen, and total phosphorus stably met the required discharge limits for urban sewage treatment plants (DB33/2169—2018), while the other indicators reached the Grade A standard for urban sewage treatment plants (GB18918—2002). The engineering investment and actual operating cost of the wastewater treatment plant were 8,200 yuan/m^3 and 2.39 yuan/m^3, respectively.
Suppressing water glint in remote sensing images and reconstructing the affected image information are effective ways to improve the quality of UAV (unmanned aerial vehicle) remote sensing data and expand water environment monitoring areas. Traditional glint information reconstruction algorithms are difficult to apply to UAV hyperspectral images. This study proposes an algorithm for automatic glint detection, removal, and information reconstruction. First, the NDWI (normalized difference water index) was used to extract the water body; the lowest value of the sum of the grayscale images over all bands was used as a threshold to segment the glint; and the Laplace operator was used to extract the glint texture. The difference between the two areas was calculated through multiple rounds of morphological dilation and threshold updates, the most frequent lowest difference was selected by voting, and the best threshold was back-calculated to remove the glint automatically. Next, the matching bands were determined based on principal component analysis, and the minimum similarity of matching blocks of different sizes was compared to obtain the best image block size. Finally, an improved Criminisi algorithm was used to reconstruct the glint removal region. The removal algorithm was applied to four real glint scenarios with a removal rate > 99%; the reconstruction results were superior to those of other algorithms both subjectively and objectively, and the difference between the variation coefficients of each band for glint-reconstructed water and normal water was within 1%, indicating good spectral applicability.
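A minimal sketch of the first stage (water extraction and an initial glint segmentation), assuming a hyperspectral cube with known green and near-infrared band indices (names hypothetical); the iterative threshold-update voting and the improved Criminisi reconstruction are beyond the scope of this sketch:

# NDWI water mask and a first-pass glint mask (sketch).
import numpy as np
from scipy import ndimage

def glint_candidates(cube, green_idx, nir_idx):
    green, nir = cube[..., green_idx], cube[..., nir_idx]
    water = (green - nir) / (green + nir + 1e-12) > 0.0   # NDWI water extraction
    gray_sum = cube.sum(axis=-1)                          # sum of all band grayscales
    thresh = gray_sum[water].mean() + 2 * gray_sum[water].std()  # placeholder threshold
    glint = water & (gray_sum > thresh)
    texture = ndimage.laplace(gray_sum)                   # Laplace glint texture
    glint = ndimage.binary_dilation(glint)                # one morphological round
    return water, glint, texture

In the paper's algorithm, the dilation and threshold update are repeated, and the threshold whose inter-area difference is most frequently minimal is selected by voting.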
In this study, constructed wetland structures were established using different substrates (gravel and aerated concrete), with or without canna planting. The effects of the different working conditions on rainwater runoff pollution were then investigated through small-scale experiments. The cost of the constructed wetland with aerated concrete and cannas (4.20 yuan per working condition) was slightly higher (by 1.00 ~ 2.90 yuan per working condition) than those of the other working conditions. However, over both operating cycles (i.e., alternating 30 h of operation and 48 h of drying), the average removal rates of ${\rm{NH}}_4^+ $-N, ${\rm{NO}}_3^- $-N, TN, ${\rm{PO}}_4^{3-} $, TP, and CODCr were 73.3%, 47.0%, 85.4%, 56.4%, 76.0%, and 65.5%, respectively, higher than those under the other working conditions by an average of 10.9% ~ 18.8%. Thus, this constructed wetland structure had the best performance and the highest cost-effectiveness, making it suitable for wider application.
In this study, a pilot-scale tidal-flow paddy wetland system based on multifunctional coupling was constructed to treat land-based aquaculture tailwater of Macrobrachium rosenbergii. This study explored the purification ability, CH4 emissions, arthropod diversity, and comprehensive benefits of the tidal-flow paddy wetland system to provide a scientific basis for its application. The results showed that the tidal-flow paddy wetland system could effectively purify land-based aquaculture tailwater: the removal efficiencies of dissolved inorganic nitrogen, total nitrogen, dissolved inorganic phosphorus, and total phosphorus were approximately 54.3%, 44.9%, 42.9%, and 43.0%, respectively. At the same time, the system had no negative impact on the external environment and indirectly purified river water. Compared with conventional paddy fields, the tidal-flow paddy wetland system reduced CO2 and CH4 emissions by 5.4% and 92.5%, respectively, and during the flooding period the abundance of the mcrA gene in the tidal-flow paddy wetland decreased by 82.3%. Moreover, compared to the control, the tidal-flow paddy wetland system improved biodiversity and the ratio of natural enemy abundance to pest abundance, inhibited pest outbreaks, supported more species, and increased comprehensive benefits.
Constructing a reliable evaluation system based on long-term monitoring data is important for evaluating coastal ecological restoration projects, and since the Yingwuzhou Wetland was established over five years ago, we have conducted comprehensive field investigation and monitoring there. Here, we used the emergy analysis method and collected the relevant data through field research, scientific monitoring, and literature review to construct an emergy analysis structure chart and an emergy index system for the Yingwuzhou Wetland. The main emergy indexes, such as the natural assets and ecosystem services of the wetland, were analyzed, and its functional performance was compared across restoration periods. The results show that the total emergy of the natural assets in the Yingwuzhou Wetland in 2021 was 8.92 × 10^16 sej, equivalent to an emergy-monetary value of 2.247 × 10^5 yuan, and the total emergy of ecosystem services was 8.88 × 10^17 sej·a^-1. After the implementation of restoration, the ecological quality of Yingwuzhou was significantly improved, and its natural asset and ecosystem service emergy were 5.01 and 5.73 times those before restoration. The emergy self-support ratio (ESR) of the Yingwuzhou Wetland ecosystem was 0.47, and the emergy yield ratio (EYR) and emergy sustainability index (ESI) were 28.29 and 25.03, respectively, indicating that the wetland had high output efficiency and suitable space for sustainable development. This study shows that, based on long-term monitoring data, the emergy analysis method can well reflect the effectiveness of coastal ecological restoration projects, and the evaluation system and method can provide a reference for similar coastal restoration projects in the future.
Due to the influence of tidal processes, the sampling and study of microplastics in estuarine areas have been hampered by inconsistent research methods and large data errors. In this study, whole-water-depth sampling was conducted in the Jiulong River estuary using the pumping method in August 2019. The abundances and distribution patterns of microplastics among different water layers and stations were analyzed and compared with studies performed using different sampling methods. The results showed that the microplastic abundances in the surface, middle, and bottom waters of the Jiulong River estuary were markedly different and influenced by tidal effects, and the abundances obtained by different sampling methods also differed significantly. Near the pollution source, the abundance of microplastics in the surface water was significantly higher than in the middle and bottom waters, whereas within the main estuary, which is subject to strong tidal action and obvious stratification, the abundances in the middle and bottom waters were higher than in the surface water. The pumping method was more effective than the trawling method at retaining plastic fibers, and the volume of water filtered by the pumping method and the size of the filtering mesh had significant effects on the abundances and sizes of the obtained microplastics. Because different sampling methods lead to considerable differences in microplastic abundance results, tidal effects must be taken into account during microplastic monitoring in tidal estuaries. It is therefore recommended that operational monitoring and flux observations of microplastics in tidal estuaries be established, using sampling schemes that cover complete tidal cycles (high and low tides) in both flood and dry seasons.
Large cascade reservoirs in river basins impound water in late summer and early autumn and release it in the dry season of the following year. These activities affect seasonal river discharge into the sea, which in turn affects saltwater intrusion in estuaries and the utilization of freshwater resources. This study evaluated the effective storage capacity of large cascade reservoirs and the cross-basin water transfers of the South-to-North Water Transfer Project in the Yangtze River Basin. The estuarine and coastal three-dimensional numerical model ECOM-si was used to simulate and analyze the impact of these major projects on estuarine saltwater intrusion and freshwater resources. In 2020, the effective storage capacity of the large reservoirs built in the middle and upper reaches of the Yangtze River Basin was 70.611 billion cubic meters, with a mean reduction in monthly river discharge of 13,398 m^3/s during the September-October storage period. By 2035, the completion of additional reservoirs in the basin will raise the total effective storage capacity to 94.388 billion cubic meters and reduce the average monthly runoff during the storage period by 17,909 m^3/s. Using the average monthly river discharge measured at the Datong Hydrological Station from 1950 to 2020, and taking into account the discharge variations caused by major projects in the basin, the average monthly river discharge from August to October was calculated for 2020 and 2035 under regular and extremely dry hydrological years. Numerical simulation results show that saltwater intrusion from September to October will intensify owing to impoundment in the cascade reservoirs and the resulting decrease in river discharge. During regular hydrological years, freshwater can still be obtained from September to October at the four water-source reservoirs in the South Branch of the Yangtze River Estuary (Dongfengxisha, Taicang, Chenhang, and Qingcaosha). In extremely dry years, however, the water at these reservoirs becomes unsuitable for intake during these months. In 2020, the total numbers of consecutive days with unsuitable water intake at the four reservoirs were 28.75, 24.99, 29.63, and 37.47 days, respectively, and are predicted to rise to 46.53, 44.18, 47.56, and 50.75 days, respectively, in 2035. In average and extremely dry hydrological years, reservoir impoundment in late summer and early autumn coincides with strong northerly winds, which can further decrease water intake. Basin reservoirs should therefore reduce impoundment and release water during extremely dry years to ensure the safety of freshwater resources in the Yangtze River Estuary.
To clarify the effects of global warming on dark carbon fixation (DCF) in eutrophic estuaries, the rates of total DCF and of DCF driven by ammonia-oxidizing microorganisms (DCFAOB) were studied under various water temperatures and nitrogen concentrations using 14C labeling (NaH14CO3) and the allylthiourea (ATU) inhibitor method. The Yangtze River Estuary was used as the study area, with sampling locations set up in the estuary and offshore. The DCF rates in the Yangtze River Estuary ranged from 0.23 to 0.33 μmolC·L^-1·d^-1, and the DCFAOB rates accounted for 4.13% to 43.61% of the total DCF. Although DCF rates increased significantly at the optimum temperatures, the response to changes in ambient temperature was more pronounced under low salinity. The optimum temperatures for DCF in low- and high-salinity areas were 30℃ and 25℃, respectively, and the addition of ammonia nitrogen under these conditions significantly increased the DCF rates. The results of this study reveal how dark carbon fixation in estuarine water responds to environmental temperature changes, providing theoretical support and data references for the comprehensive understanding and scientific assessment of carbon fixation and carbon sink flux in estuarine ecosystems.
Coastal salt marsh wetlands have high productivity and low decomposition rates owing to long-term flooding, and they store a large amount of soil organic carbon. As newly restored salt marsh wetlands develop, changes in vegetation growth traits, soil physicochemical properties, and organic carbon content affect their carbon sequestration function. In this study, using a restored salt marsh wetland in Hengsha (Chongming, Shanghai) as an example, changes in the vegetation growth characteristics and soil organic carbon content of different vegetation communities at varying developmental ages were analyzed using the space-for-time substitution method. Key factors affecting the carbon sequestration capacity of these restored wetlands were also identified. The results showed that the organic carbon content in the newly restored salt marsh wetlands increased with developmental age over 0 ~ 20 years. Soil porosity and water content were effective indicators of changes in soil organic carbon content. The newly restored wetlands had a high soil carbon density, with a total organic carbon density of (21.49 ± 3.67) tC·hm^-2 in the 0 ~ 20 cm soil layer of the eight-year-old wetland, similar to that of the natural wetland. The vegetation growth and carbon sequestration capacity of Phragmites australis were higher than those of Scirpus mariqueter and their ecotone.
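The reported carbon densities are presumably computed with the standard layer-wise soil organic carbon density formula, in which $\omega_i$ is the organic carbon content (g·kg^-1), $\rho_i$ the bulk density (g·cm^-3), and $d_i$ the thickness (cm) of soil layer $i$: $ \mathrm{SOCD} = 0.1\sum\nolimits_i \omega_i \rho_i d_i $ (tC·hm^-2), where the factor 0.1 converts the mixed units to tonnes of carbon per hectare.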
We explored nutrient distribution in water and heavy metal contamination in sediments after the rice harvest in a rice-shrimp co-cropping system. We assessed the aquatic ecological risks by evaluating molecular ammonia toxicity and heavy metal levels in rice-shrimp fields, and systematically analyzed the ecotoxicity of nutrients and heavy metals in the water by monitoring physicochemical indices during the late cultivation period in four rice-shrimp co-cultivation fields. After the rice harvest, the water showed a high pH (9.25), and the total nitrogen, ammonia nitrogen, and COD concentrations reached 14.15, 11.49, and 92.01 mg/L, respectively. In perennial rice-shrimp co-cropping systems, elevated mass fractions of As (16.21 mg·kg^-1) and Cd (0.20 mg·kg^-1) were found in the sediments, 2.35 and 1.72 times the natural baseline levels, respectively, whereas the concentrations of the other heavy metals were below the sediment baseline levels. The potential ecological risk index and a potential biological toxicity evaluation revealed low ecological risks posed by heavy metals in rice-shrimp co-cropping sediments, which can be attributed in part to the mineral elements required for Procambarus clarkii culture. In conclusion, co-cultivating rice with shrimp can potentially mitigate soil heavy metal pollution.
This study investigated the distribution characteristics of heavy metals in the topsoil of an industrial park, assessed the pollution status, and identified potential pollution sources. A total of 514 topsoil samples were collected in the public areas of the industrial park, and the concentrations of 11 elements were measured. The spatial distribution of heavy metal concentrations was mapped using a geographic information system (GIS). Several analytical methods were applied, including single-factor index analysis, potential ecological risk index assessment, and principal component analysis-absolute principal component scores-multiple linear regression (PCA-APCS-MLR). The results revealed that, except for Cr(VI), which was below the detection limit, all 10 remaining heavy metals were detected at concentrations surpassing the background values. The exceedance proportions followed the order Cu (86%) > Cd (71%) > Co (53%) > Ni (50%) > Be (45%) > As (42%) > Sb (40%) > Pb (23%) > V (16%) > Hg (4%), and the concentrations of all major elements were relatively high. Moreover, there were areas at extremely strong (Cd: 1.7%; Sb: 1.5%) and very strong (Cd: 1.0%; Sb: 0.2%) ecological risk levels, all located near large chemical enterprises on the northwest side of the park. The composite ecological risk index of heavy metals in the surface soil indicated moderate ecological hazard, that is, a certain degree of ecological risk. The main heavy metal pollution sources in the surface soil of the study area were chemical and combustion sources, natural sources, and transportation sources, accounting for 27.2%, 17.0%, and 11.0% of the pollution load, respectively.
The impact of the rapid development of coastal aquaculture on aquatic environments is an important topic in environmental science. Quantitatively assessing the impact of aquaculture on sediment heavy metal pollution has been challenging because of the complex conservative-nonconservative behavior of heavy metals in coastal brackish waters. In this study, Sansha Bay, Fujian Province, the world's largest yellow croaker cage culture area, was used as a representative offshore aquaculture research area. Using aquaculture records extracted from remote sensing images combined with the relationships between sedimentary heavy metals and salinity, this study analyzed the effects of aquaculture on sediment heavy metal pollution. The results showed that over the past 15 years, the area of cage culture in Sansha Bay has increased from 9.1 km^2 to 33.4 km^2, and the maximum intensity of cage culture per square kilometer has increased from 3% to 22%. Correspondingly, the average levels of the chalcophile elements Cu, Zn, Cd, and Pb in the culture area were 44%, 11%, 15%, and 17% higher, respectively, than in non-farmed areas, and the slopes of their conservative regression lines against salinity decreased by 27%, 35%, 18%, and 2%, respectively. The average levels of the siderophile elements Cr, Mn, and Ni in the culture area were 16%, 15%, and 29% higher, respectively, than those in non-farmed areas. The potential ecological risk evaluation showed that Cd is a potential environmental pollutant in the surface sediments of Sansha Bay, and that Sansha Bay as a whole is at a medium ecological risk level.
To characterize the effects of a stochastic environment and major mutation factors on populations, we consider a class of facultative population systems based on Markov chains and pure-jump stable processes. First, the existence and uniqueness of a global positive solution of the proposed model is discussed. Then, sufficient conditions for ergodicity are specified. Finally, conditions for positive recurrence of the model are presented.
An ${\rm{E}}$-total coloring of a graph $G$ is an assignment of colors to the vertices and edges of $G$ such that no two adjacent vertices receive the same color and no edge receives the same color as either of its endpoints. If $f$ is an ${\rm{E}}$-total coloring of a graph $G$, the multiple color set of a vertex $x$ of $G$ under $f$ is the multiset composed of the color of $x$ and the colors of the edges incident with $x$. If any two distinct vertices of $G$ have distinct multiple color sets under an ${\rm{E}}$-total coloring $f$ of $G$, then $f$ is called an ${\rm{E}}$-total coloring of $G$ vertex-distinguished by multiple sets. The ${\rm{E}}$-total chromatic number of $G$ vertex-distinguished by multiple sets is the minimum number of colors required in such a coloring. The ${\rm{E}}$-total colorings of cycles and paths vertex-distinguished by multiple sets are discussed by means of contradiction and the construction of concrete colorings. The optimal ${\rm{E}}$-total colorings of cycles and paths vertex-distinguished by multiple sets are given, and the corresponding ${\rm{E}}$-total chromatic numbers are determined in this paper.
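Because all the definitions appear above, the chromatic numbers for small cases can be checked by brute force; a sketch for the cycle $C_n$ (vertices $0,\dots,n-1$, edge $i$ joining vertices $i$ and $i+1 \bmod n$), intended only as a verification aid for the analytically derived values:

# Minimum colors for an E-total coloring of C_n vertex-distinguished
# by multiple sets, by exhaustive search (practical only for small n).
from itertools import product

def is_valid(v, e, n):
    for i in range(n):
        j = (i + 1) % n
        if v[i] == v[j] or e[i] in (v[i], v[j]):   # E-total coloring conditions
            return False
    # multiple color set of vertex i: its own color plus both incident edge colors
    msets = [tuple(sorted((v[i], e[(i - 1) % n], e[i]))) for i in range(n)]
    return len(set(msets)) == n                    # pairwise distinct

def min_colors(n):
    for k in range(2, 2 * n + 1):
        if any(is_valid(c[:n], c[n:], n) for c in product(range(k), repeat=2 * n)):
            return k

print(min_colors(5))   # exhaustive check for C_5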
Tridiagonal sign pattern matrices and paw form sign pattern matrices were analyzed with respect to allowing algebraic positivity. Necessary conditions for these two classes of sign pattern matrices to allow algebraic positivity were given using combinatorial matrix theory and graph theory. Finally, equivalent conditions for tridiagonal sign pattern matrices and paw form sign pattern matrices of order $n$ to allow algebraic positivity were determined.
An involution ring is called a *r-clean ring if every element is the sum of a projection and a *-regular element. Some extensions of *r-clean rings are discussed, and a characterization of the elements of *-abelian *r-clean rings is given.
In this paper, sufficient conditions for the forced oscillation of solutions of impulsive multi-delay fractional partial differential equations with a damping term are established using the method of differential inequalities under Robin and Dirichlet boundary conditions. An example is given to verify the validity of the main results.
With the popularization of university information systems and their increasing usage frequency, teachers and students have higher requirements for data consistency, accuracy, timeliness, and completeness. The original data synchronization scheme, which used extensible markup language (XML), suffers from low synchronization efficiency and poor extensibility. The open-source tool DataX can synchronize data between various heterogeneous databases without modifying the source database. This study used DataX to improve the original data synchronization scheme and proposed different synchronization schemes for the various business requirements and application scenarios arising in the construction of a university postgraduate information system. Furthermore, to address the limitation that a DataX job performs only one read and one write per execution, a one-read, multiple-writes method was designed. Comparison experiments show that the optimized scheme improves data synchronization efficiency, has better scalability, and can meet the data synchronization requirements of universities.
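A minimal sketch of the one-read, multiple-writes idea: a single reader thread fans each record out to several writer queues, so the source is scanned once while several targets are written concurrently (all names and data are hypothetical; a real implementation would sit inside DataX's reader/writer plugin framework):

# One read, multiple writes via per-writer queues (sketch).
import queue
import threading

def read_once(rows, out_queues):
    for row in rows:
        for q in out_queues:      # duplicate each record to every writer
            q.put(row)
    for q in out_queues:
        q.put(None)               # sentinel marking end of stream

def write(name, q):
    while (row := q.get()) is not None:
        print(f"{name} <- {row}")  # stand-in for a real database insert

rows = [("s001", "Alice"), ("s002", "Bob")]
queues = [queue.Queue() for _ in range(3)]   # three target databases
writers = [threading.Thread(target=write, args=(f"db{i}", q))
           for i, q in enumerate(queues)]
for w in writers:
    w.start()
read_once(rows, queues)
for w in writers:
    w.join()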
Review-based recommendation mainly exploits textual information that reflects item characteristics and user preferences. However, most existing approaches overlook the influence of information from hidden strangers on the selection of reviews for the target user. Such information can more accurately measure the relative feelings of the user and complement the target user's expression, leading to more refined user modeling. Recently, several studies have attempted to incorporate information from similar strangers, but they ignore the information carried by other strangers. In this study, we propose a stranger-collaborative review-based recommendation model that makes effective use of information from strangers to improve the accuracy and richness of user modeling. Specifically, to capture potential user preferences elaborately, we first designed a collaborative stranger attention module that considers the textual similarities and preference interactions between the target user and the hidden strangers implied by the reviews. We then developed a collaborative gating module to dynamically integrate information from strangers at the preference level based on the characteristics of the target user-item pair, effectively filtering stranger preferences and enriching target user modeling. Finally, we applied a latent factor model to accomplish the recommendation task. Experimental results demonstrate the superiority of our model over state-of-the-art methods on real-world datasets from various sources.
This study explores multimodal understanding and reasoning for one-stage visual grounding. Existing one-stage methods extract visual feature maps and textual features separately and then perform multimodal reasoning to predict the bounding box of the referred object. These methods suffer from two weaknesses. First, the pre-trained visual feature extractors introduce text-unrelated visual signals into the visual features, which hinders multimodal interaction. Second, the reasoning process lacks visual guidance for language modeling. These shortcomings limit the reasoning ability of existing one-stage methods. We propose a low-level interaction to extract text-related visual feature maps and a high-level interaction to incorporate visual features into language modeling and to perform multistep reasoning on the visual features. Based on the proposed interactions, we present a novel network architecture called the dual-path multilevel interaction network (DPMIN). Experiments on five commonly used visual grounding datasets demonstrate the superior performance of the proposed method and its real-time applicability.
The computation and storage demands of deep neural network models make them unsuitable for deployment on embedded devices with limited area and power. To address this issue, stochastic computing reduces the storage and computational complexity of neural networks by representing data as stochastic sequences and performing arithmetic operations such as addition and multiplication with basic logic units. However, short stochastic sequences cause discretization errors when network weights are converted from floating-point numbers to stochastic sequences, which can reduce the inference accuracy of stochastic computing network models. Longer stochastic sequences improve the representation range and alleviate this problem, but they also result in longer computational latency and higher energy consumption. We propose a differentiable quantization function based on the Fourier transform. During training, the function improves the matching of the model to stochastic sequences, reducing the discretization error during data conversion and thereby preserving the accuracy of stochastic computing neural networks even with short sequences. Additionally, we present an adder designed to enhance the accuracy of the operation unit and to parallelize computations by chunking inputs, thereby reducing latency. Experimental results demonstrate a 20% improvement in model inference accuracy compared with other methods, as well as a 50% reduction in computational latency.
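The length/accuracy trade-off described above is easy to see in a toy stochastic computing example: a value $p \in [0, 1]$ is encoded as a Bernoulli bitstream, and multiplication reduces to a bitwise AND of two streams (a generic illustration, not the paper's Fourier-based quantization function):

# Stochastic computing: bitstream encoding and AND-gate multiplication.
import numpy as np

rng = np.random.default_rng(0)
to_stream = lambda p, n: rng.random(n) < p     # P(bit = 1) = p
a, b = 0.6, 0.5
for n in (16, 256, 4096):                      # longer stream, lower error
    est = (to_stream(a, n) & to_stream(b, n)).mean()
    print(f"length {n:5d}: a*b estimated as {est:.4f} (exact {a * b})")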
The surface structure of a container lock pin is complex, making it difficult to establish a point cloud model with high surface feature integrity. Therefore, a turntable-based multi-view, multi-attitude point cloud model reconstruction algorithm is proposed to restore the complete surface features of the lock pin. Because sensors at a fixed height are paired with rotating turntables in most scenarios, some of the collected surface features are usually missing. The algorithm first uses the parameter calibration results of the turntable to stitch the multi-view three-dimensional point clouds and establish a fixed-attitude point cloud model. Then, using the proposed improved spherical projection algorithm, the placement attitude of the lock pin on the turntable is selected to establish a point cloud model in another attitude. Finally, the point cloud models from the multiple attitudes are merged to complete the surface characteristics. Experimental results show that the proposed algorithm can build a lock pin point cloud model with high surface feature integrity.
Infrared small-target detection has always been an important technology in infrared tracking systems. Current approaches for infrared small-target detection in complex backgrounds are prone to false alarms and slow detection speeds. From the perspective of the human visual system, and building on the multiscale local contrast measure using a local energy factor (MLCM-LEF), an infrared small-target detection method based on a double-layer local energy factor (DLEF) is proposed. Target detection is performed from the perspectives of local energy difference and local brightness difference: the double-layer local energy factor describes the difference between a small target and the background from the energy perspective, while a weighted luminance difference factor detects the target from the brightness perspective. The infrared small target is extracted through a two-dimensional Gaussian fusion of the two processing results. Finally, the image mean and standard deviation are used for adaptive threshold segmentation to extract the small infrared target. In experiments on public datasets, the proposed method improved background suppression compared with the MLCM-LEF algorithm and reduced the single-frame detection time by one-third.
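A generic patch-based local contrast map in the spirit of such detectors (not the exact MLCM-LEF or DLEF formulation): each cell's peak is compared against the means of its eight neighboring cells, so small bright targets score high while extended edges are suppressed:

# Simple local contrast map over cell x cell patches (sketch).
import numpy as np

def local_contrast(img, cell=3):
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for i in range(cell, h - 2 * cell, cell):
        for j in range(cell, w - 2 * cell, cell):
            L0 = img[i:i + cell, j:j + cell].max()          # center cell peak
            means = [img[i + a * cell:i + (a + 1) * cell,
                         j + b * cell:j + (b + 1) * cell].mean()
                     for a in (-1, 0, 1) for b in (-1, 0, 1) if (a, b) != (0, 0)]
            out[i:i + cell, j:j + cell] = L0 ** 2 / (max(means) + 1e-6)
    return out

Adaptive thresholding of the resulting map (image mean plus a multiple of the standard deviation, as in the abstract) then extracts the targets.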
Three-dimensional point cloud semantic segmentation is an essential task for 3D visual perception and has been widely used in autonomous driving, augmented reality, and robotics. However, most methods work under a fully-supervised setting, which heavily relies on fully annotated datasets. Many weakly-supervised methods have utilized pseudo-labeling to retrain the model and reduce labeling time. However, previous methods fail to address the confirmation bias induced by false pseudo labels. In this study, we propose a novel weakly-supervised 3D point cloud semantic segmentation method based on group contrastive learning, which constructs contrast between positive and negative sample groups selected from the pseudo labels. Within group contrastive learning, the pseudo labels compete with one another, reducing the gradient contribution of falsely predicted pseudo labels. Results on three large-scale datasets show that our method outperforms state-of-the-art weakly-supervised methods using minimal annotations and even surpasses some classic fully-supervised methods.
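One plausible minimal form of a group contrastive loss (the abstract does not give the exact loss, so this InfoNCE-style variant is only indicative): the anchor is pulled toward the positive group and pushed away from the negative group, and the softmax weighting within the groups lets confident samples dominate the gradient:

# Group InfoNCE-style contrastive loss on L2-normalized embeddings (sketch).
import numpy as np

def group_contrastive_loss(anchor, positives, negatives, tau=0.1):
    a = anchor / np.linalg.norm(anchor)
    def scores(group):
        g = group / np.linalg.norm(group, axis=1, keepdims=True)
        return np.exp(g @ a / tau)
    pos, neg = scores(positives), scores(negatives)
    return -np.log(pos.sum() / (pos.sum() + neg.sum()))

rng = np.random.default_rng(0)
loss = group_contrastive_loss(rng.normal(size=8),
                              rng.normal(size=(4, 8)),    # positive group
                              rng.normal(size=(16, 8)))   # negative group
print(loss)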
The remarkable achievements of deep learning in computer vision have driven significant progress in example-based texture synthesis. Neural texture synthesis models mainly consist of local components, such as convolution and up/down-sampling, which are unsuitable for capturing the irregular structural attributes of non-stationary textures. Inspired by the duality of the frequency and spatial domains, a non-stationary texture synthesis method based on hidden-layer Fourier convolution is proposed in this study. The method uses a generative adversarial network as its basic architecture, splits features along the channel dimension in the hidden layer, and builds a local branch in the image domain and a global branch in the frequency domain to account for both visual perception and structural information. Experimental results show that the method can handle structurally challenging non-stationary texture exemplars. Compared with state-of-the-art methods, it yields better results in learning and expanding large-scale structures.
The proposed image caption generation algorithm based on decoupling commonsense associations aims to eliminate the interference of commonsense associations between entity types in model reasoning and to improve the fluency and accuracy of the generated captions. Targeting relationship statements in current image captions that conform to common sense but not to the image content, the algorithm first uses a novel training method to increase the relationship detection model's attention to the real relationships in the image and improve the accuracy of relationship reasoning. Then, a relation-aware entity interaction method is used to perform targeted information interaction for related entities, strengthening the relationship information. Experimental results show that the proposed algorithm corrects some commonsense false relationships, generates more accurate image captions, and achieves better results on various evaluation metrics.
This paper presents a method for skinning in character animation based on implicit surfaces, designed to deform animated models with a skeleton and associated skinning weights. The method reconstructs the mesh around a given skeleton using the Hermite radial basis function and Poisson-disk sampling on surfaces. This process transforms the character's volume into a set of localized 3D scalar fields while preserving the original mesh properties. Field functions are then constructed and employed to refine the results obtained from geometric skinning. Combined with two types of combination operators, the implicit method generates realistic skin deformations around the human skeleton model. The method avoids the candy-wrapper twist and joint-bulging problems and can handle skin collisions and muscle protrusions. Owing to its post-processing nature, the method is well suited for animation generation in standard production pipelines.
The possibility of detecting cosmic torsion originating from large-scale Lorentz violation in cosmology via the shift of the energy distribution of massive cosmic neutrinos in a spatially flat FRW (Friedmann-Robertson-Walker) spacetime background is discussed. Scattering of massive cosmic neutrinos by cosmic torsion shifts the peak position of their final-state energy distribution at the order of $m^2/E^2$. Moreover, the shift values for Dirac and Majorana neutrinos differ by the vector part of the torsion in the case of non-minimal vector torsion coupling.
In this paper, we calculate the next-to-leading order (NLO) corrections to the baryon masses and magnetic moments using covariant chiral perturbation theory within the extended minimal subtraction (${\text{E}}\overline {{\text{MS}}}$) scheme under SU(3). We also present a comparative analysis of the experimental and lattice quantum chromodynamics data with the $ {\text{E}}\overline {{\text{MS}}} $ results and extrapolate them to the physical point. We show that $ {\text{E}}\overline {{\text{MS}}} $ provides reasonable theoretical and numerical results at the NLO, better than those obtained from the heavy-baryon approach and infrared regularization, and close to those obtained with the extended-on-mass-shell (EOMS) scheme.
Recently, several air shower observatories have established that the number of muons produced by ultrahigh-energy cosmic rays in extensive air showers is significantly larger than model predictions. This study confirms that gluon condensation may occur when ultrahigh-energy cosmic rays scatter on air particles. In this case, the production of strange quarks is significantly enhanced, so that more kaons are generated among the fragmentation products and more of the air shower energy is channeled into the hadron cascade, which may explain the muon puzzle.
Based on first-principles calculations and the particle swarm optimization algorithm, the crystal structures and physical properties of Th2N2S are examined in the pressure range of 0 ~ 200 GPa. Our results successfully reproduce the experimental $P\bar {{3}}m1$ phase at ambient pressure and predict two new high-pressure structures: the I4/mmm and Cmmm phases. A series of pressure-induced structural phase transitions was determined, from the $P\bar {{3}}m1$ phase to the I4/mmm phase and then to the Cmmm phase, with transition pressures of 48.2 GPa and 156.2 GPa, respectively. The phonon dispersion curves and elastic constants of Th2N2S indicate that all three phases are dynamically and mechanically stable. The calculated mechanical properties demonstrate the intrinsic ductility of the $P\bar {{3}}m1$, I4/mmm, and Cmmm phases, among which the Cmmm phase has the largest degree of anisotropy. Furthermore, our electronic structure calculations show that the transition from the $P\bar {{3}}m1$ phase to the I4/mmm phase is a semiconductor-to-metal transition.
The crystal structures, stability, electronic structures, and magnetism of the two-dimensional transition metal chalcogenide compounds MX2-MX-MX2 (M = V, Cr, Mn, and Fe; X = S, Se, and Te) were systematically investigated using first-principles calculations based on density functional theory (DFT), and the magnetic coupling mechanisms of these materials were analyzed. The results show that the formation energies of these compounds are negative, indicating that they can be fabricated experimentally. MnS2-MnS-MnS2 and MnSe2-MnSe-MnSe2 exhibit ferromagnetic half-metallic properties, whereas CrS2-CrS-CrS2 transforms into a ferromagnetic half-metal under applied stress.
Engineering interfacial complexion (or phase) transitions has been a growing trend in grain boundary and solid surface systems, but little attention has been paid to chemically heterogeneous solid-liquid interfaces. In this study, atomistic simulations are conducted to reveal the coexistence of novel in-plane multi-interfacial states at a Cu(111)/Pb(L) interface at a temperature just above the Pb freezing point. Four monolayer interfacial states, namely two CuPb alloy liquids and two prefreezing Pb solids, are observed to coexist within the two interfacial layers sandwiched between the bulk solid Cu and bulk liquid Pb. Computation of the spatial variations of various properties along the direction normal to the in-plane solid-liquid boundary lines for both interfacial layers presents a rich and varied picture of inhomogeneity and anisotropy in the mechanical, thermodynamical, and dynamical properties. The “bulk” values extracted from the in-plane profiles suggest that each interfacial state has distinct equilibrium values that deviate significantly from those of the bulk solid and liquid phases. The results also indicate that the “complexion (or phase) diagrams” of the Cu(111)/Pb(L) interface resemble those of eutectic binary alloy systems rather than the monotectic phase diagram of the bulk CuPb alloy. The reported data support the development of interfacial complexion (or phase) diagrams and interfacial phase rules and provide new guidelines for regulating heterogeneous nucleation and wetting processes.
The characteristics of the ground states of Bose-Einstein condensates (BECs) in spin-dependent bilayer square optical lattices are investigated in this paper. The relative twist angle between the two lattices and the interlayer coupling strength are the main tunable parameters that affect the density distribution of the ultracold atoms. When the lowest band of the lattices exhibits a single-well dispersion, the localization of the ultracold atoms in the Moiré lattice is determined by the twist angle, interlayer coupling strength, number of atoms, and lattice depth. When the lowest band exhibits a double-well dispersion, the twist between the lattices leads to a twist between the two spin states, and with increasing interlayer coupling strength the two twisted spin states overlap. The results of this work should stimulate further exploration of novel quantum effects with ultracold atoms in twisted optical lattices.
The dynamics of quantum gases with time-varying interactions have attracted research interest owing to recent advances in experimental techniques such as optical Feshbach resonance. A range of novel dynamic behaviors, including Faraday patterns and Bose fireworks, have been observed in these systems. In this research, the dynamics of two harmonically trapped atoms with a periodically modulated interaction strength is investigated. Because the Hamiltonian is time dependent, the system energy is not conserved; however, Floquet theory still applies to the time-periodic Hamiltonian and allows its quasi-energy to be defined. The exact equations for the quasi-energies of the two-body problem are derived. By numerically solving these equations, we find that the two-body quasi-energy spectrum exhibits various novel behaviors for different driving parameters and frequencies.
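For reference, the quasi-energy invoked above is defined through the Floquet theorem: for a Hamiltonian with period $T$, every solution of the Schrödinger equation can be written as $\psi(t) = {\rm e}^{-{\rm i}\varepsilon t/\hbar}\,\phi(t)$ with $\phi(t+T)=\phi(t)$, where the quasi-energy $\varepsilon$, defined modulo $2\pi\hbar/T$, plays the role of the conserved energy of the static problem.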
The Gross-Pitaevskii equation is widely used in Bose-Einstein condensate research, yet it can rarely be solved analytically; it is therefore important to develop high-precision numerical methods for it. Accordingly, a numerical method combining the split-step method, the Crank-Nicolson algorithm, and the Numerov algorithm with fourth-order accuracy was developed in this work. Tests show that, compared with the five-point finite difference method, the proposed algorithm is more efficient and requires less memory.
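The paper combines splitting with Crank-Nicolson and Numerov discretizations; for orientation, a standard second-order split-step Fourier integrator for the dimensionless 1D Gross-Pitaevskii equation $ {\rm i}\partial_t\psi = [-\tfrac{1}{2}\partial_x^2 + V + g|\psi|^2]\psi $ looks as follows (a generic sketch, not the paper's scheme):

# Strang split-step Fourier step for the 1D GPE (sketch).
import numpy as np

def gpe_step(psi, x, dt, g, V):
    k = 2 * np.pi * np.fft.fftfreq(x.size, d=x[1] - x[0])
    psi = np.exp(-0.5j * dt * (V + g * np.abs(psi) ** 2)) * psi       # half nonlinear step
    psi = np.fft.ifft(np.exp(-0.5j * dt * k ** 2) * np.fft.fft(psi))  # full kinetic step
    return np.exp(-0.5j * dt * (V + g * np.abs(psi) ** 2)) * psi      # half nonlinear step

x = np.linspace(-10, 10, 256, endpoint=False)
psi = np.exp(-x ** 2 / 2).astype(complex)
psi /= np.sqrt(np.trapz(np.abs(psi) ** 2, x))
for _ in range(1000):
    psi = gpe_step(psi, x, dt=1e-3, g=1.0, V=0.5 * x ** 2)
print(np.trapz(np.abs(psi) ** 2, x))   # norm is preserved up to splitting error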
In this study, based on lossy SU(2) and SU(1,1) interferometer models, phase estimation in interferometers was investigated. A general expression was derived for the overestimation of the quantum Fisher information (QFI) that arises when single-parameter phase estimation is performed instead of two-parameter phase estimation. In addition, the variation of the overestimated QFI with the loss factor and beam-splitting ratio was numerically analyzed for coherent and squeezed vacuum state inputs, and the disappearance and recovery of the overestimation were related to the beam-splitting ratio, gain factor, and squeezing amplitude. By adjusting the beam-splitting ratio and loss factor, the best sensitivity was obtained, which is beneficial for quantum precision measurements in lossy environments.
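For orientation, the quantity being tracked is the standard quantum Fisher information; for a pure probe state $|\psi_\phi\rangle$ it reads $F_Q = 4\left(\langle\partial_\phi\psi|\partial_\phi\psi\rangle - |\langle\psi|\partial_\phi\psi\rangle|^2\right)$, and the attainable phase sensitivity is bounded by the quantum Cramér-Rao relation $\Delta\phi \geqslant 1/\sqrt{F_Q}$.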
Studies of high-order harmonic generation mechanisms have mainly focused on interband polarization and intraband currents, as well as on the anomalous current caused by the Berry curvature. The long-neglected mixture-term currents can be obtained by decomposing the total laser-induced current into contributions from the different mechanisms. In this study, the dependence of the high harmonics generated by the different mechanisms on peak field amplitude and laser wavelength was investigated by numerically solving the semiconductor Bloch equations (SBEs), and the interference between the current mechanisms was explored. The high-order harmonic spectra induced by the mixture-term currents and by the interband polarization were found to follow extremely similar variation patterns, with very close harmonic intensities, under changes in both wavelength and peak amplitude. Additionally, the anomalous currents were found to produce only even harmonics polarized perpendicular to the laser field, and these anomalous harmonics are unique in exhibiting a minimum as the wavelength and peak intensity vary. Analysis of the interference between the mechanisms revealed that the interband polarization harmonics and the intraband harmonics (including the anomalous harmonics) interfere significantly with each other in the perpendicular polarization direction, whereas the interference of the mixture-term harmonics with the intraband harmonics is negligible.
We propose an efficient grating coupler design based on a lithium niobate guided-mode structure and its optimized optical excitation configuration. The coupling behavior of the grating coupler is numerically analyzed using the finite-difference time-domain (FDTD) algorithm. We study the effects of the grating period, duty cycle, silica isolation layer thickness, and the polarization and angle of the incident light on the coupling efficiency of the grating. The spatial electric field distributions are simulated at resonant and non-resonant wavelengths. The results show that with a grating period of 650 nm, a duty cycle of 0.3, and an etching depth of 130 nm, an optimized coupling efficiency of ~38% is obtained for TM (transverse magnetic) polarized light incident at 17° from the grating normal, effectively coupling spatial light into the lithium niobate subwavelength waveguide film. These results provide a valuable reference for the design and application of LiNbO3 micro-nano grating couplers.
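The reported incidence angle is fixed by the standard grating phase-matching condition, $n_{\rm eff} = n_i\sin\theta + m\lambda/\Lambda$, where $n_{\rm eff}$ is the effective index of the excited guided mode, $n_i$ the refractive index of the incidence medium, $\theta$ the incidence angle, $m$ the diffraction order, $\lambda$ the wavelength, and $\Lambda$ the grating period.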
Numerical solution of wave function evolution plays an important role in quantum mechanics research. Many numerical algorithms have been developed for time-independent potential fields; however, many physical problems involve time-dependent potentials, for which previously developed algorithms cannot guarantee unitary evolution of the wave function. In this study, a Crank-Nicolson algorithm that maintains unitary evolution in time-dependent potential fields is developed, with a fourth-order-accurate Numerov algorithm used to achieve high-precision spatial discretization. A numerical test demonstrates that the new algorithm maintains the unitarity and stability of the wave function evolution.
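A minimal sketch of the unitary Crank-Nicolson step for a time-dependent potential, using the Cayley form $(1 + {\rm i}\,\delta t\,H/2)\,\psi^{n+1} = (1 - {\rm i}\,\delta t\,H/2)\,\psi^{n}$ with $H$ assembled at the midpoint time ($\hbar = m = 1$; a plain three-point Laplacian is used here, whereas the paper uses the fourth-order Numerov discretization):

# Unitary Crank-Nicolson step with a midpoint-evaluated potential (sketch).
import numpy as np

def cn_step(psi, dx, dt, V_mid):
    n = psi.size
    lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
           + np.diag(np.ones(n - 1), -1)) / dx ** 2
    H = -0.5 * lap + np.diag(V_mid)          # Hermitian, so the Cayley step is unitary
    A = np.eye(n) + 0.5j * dt * H
    B = np.eye(n) - 0.5j * dt * H
    return np.linalg.solve(A, B @ psi)

x = np.linspace(-10, 10, 200)
dx = x[1] - x[0]
psi = np.exp(-x ** 2).astype(complex)
psi /= np.sqrt((np.abs(psi) ** 2).sum() * dx)
t, dt = 0.0, 1e-3
for _ in range(200):
    V_mid = 0.5 * (1 + 0.3 * np.sin(t + dt / 2)) * x ** 2   # driven trap at t + dt/2
    psi = cn_step(psi, dx, dt, V_mid)
    t += dt
print((np.abs(psi) ** 2).sum() * dx)   # norm preserved to machine accuracy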
Quantum parameter estimation is a powerful theoretical tool for inferring unknown parameters in physical models from experimental data. The Jaynes-Cummings model is widely used in quantum optics and describes the interaction between a two-level atom and a single-mode quantum optical field. The estimation precision of the atom-light coupling strength “g” in this model was systematically studied, and the initial state with which the estimation achieves the best precision was identified. Our results can improve the precision of quantum measurements based on the Jaynes-Cummings model and can be applied to quantum metrology with other hybrid quantum systems.
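The model in which $g$ is estimated is the standard Jaynes-Cummings Hamiltonian (rotating-wave approximation, $\hbar = 1$): $H = \omega_c a^\dagger a + \tfrac{1}{2}\omega_a\sigma_z + g\,(a^\dagger\sigma_- + a\,\sigma_+)$, where $a$ is the cavity mode operator, $\sigma_\pm$ are the atomic raising and lowering operators, and $\omega_c$ and $\omega_a$ are the cavity and atomic transition frequencies.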
An electronic payment protocol based on basic quantum mechanics is proposed. Current loopholes in classic payment systems pose security risks. The proposed scheme utilizes the correlations between entangled particles at the quantum level to implement the signing, purchasing, and paying steps, with the validity of a signature verified via quantum one-way functions and quantum SWAP test circuits. Payment information is transmitted through the redundant particles used in channel detection, thereby saving costs. Analysis shows that the proposed scheme has unconditional security guaranteed by the basic principles of quantum mechanics and meets the basic requirements of payment systems.
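For reference, the SWAP test mentioned above compares two states $|\psi\rangle$ and $|\phi\rangle$ by measuring an ancilla, which yields outcome 0 with probability $P(0) = \tfrac{1}{2} + \tfrac{1}{2}\,|\langle\psi|\phi\rangle|^2$; identical states therefore always pass, while differing states fail with finite probability, which is what makes forged signatures statistically detectable.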
In this research, taking the two-qubit Ising model with Dzyaloshinskii-Moriya (DM) interaction as the research object, we investigate the effects of coupling strength, DM interaction, and ambient temperature on the linear entropy uncertainty relation (EUR) of the system. The variation of thermal entanglement with ambient temperature is also discussed, and the relationship between thermal entanglement and the linear EUR is compared. The results demonstrate that the trends of the systemic linear entropy uncertainty and the thermal entanglement depend on the choice of environmental parameters, and their overall evolution behaviors are roughly anti-correlated. Additionally, for a complete set of mutually unbiased bases, the lower bound of the uncertainty relation varies with the number of measurement bases when different combinations of measurement bases are selected; moreover, the linear EUR becomes an equality in special cases, and its lower bound does not depend on the choice of a specific observable. Compared with previous quantum-memory-assisted EURs, this provides a useful reference for precision measurement.
In general, the time scale of group opinion formation can be considered significantly larger than that of message propagation on social media. However, this assumption does not hold in some extreme scenarios. We therefore established a noisy threshold voter-UAU (unaware-aware-unaware) coupled model with an adjustable relative time scale between message propagation and opinion formation, and studied the interplay between the two dynamics under different relative evolutionary rates and its effects on their evolution. Both mean-field analyses and Monte Carlo simulations demonstrate that a smaller time scale of message propagation relative to opinion formation favors the formation of a bistable phase, which arises from the inherent differences between the two dynamics and their synergistic interaction. The relative time scale between the two dynamics affects not only the proportion of “positive” opinions in the final state, but also the critical basic reproduction number of message propagation at which the model undergoes a phase transition. In particular, the behavior with respect to changes in the relative time scale differs depending on the level of the “positive” opinion proportion: when the proportion of “positive” opinions is high, a smaller message propagation time scale leads to a higher proportion of “positive” opinions, and vice versa. This study addresses a gap in the field regarding the impact of the relative time scale on coupled collaborative dynamics and facilitates a deeper understanding of its profound influence on their evolution.
The operating system is the core and foundation of the entire computer system, and its reliability and safety are vital: faults or vulnerabilities in the operating system can lead to system crashes, data loss, privacy breaches, and security attacks. In safety-critical systems, any operating system error can result in significant loss of life and property. Ensuring the safety and reliability of operating systems has therefore long been a major challenge in industry and academia. Current methods for verifying operating system safety include software testing, static analysis, and formal methods, of which formal methods are the most promising for ensuring safety and trustworthiness: mathematical models can be established and formally analyzed and verified to discover potential errors and vulnerabilities. For an operating system, formal methods can verify the correctness and completeness of its functions as well as overall system safety. Building on existing formal verification achievements for operating systems, a formal verification scheme for embedded operating systems is proposed herein. The scheme uses the VCC (verified C compiler), CBMC (C bounded model checker), and PAT (process analysis toolkit) tools to verify the operating system at the unit, module, and system levels, respectively. Successfully applied to a task scheduling architecture case of a certain operating system, the scheme exhibits a certain universality for analyzing and verifying embedded operating systems.
An and-inverter graph (AIG) is a representation of an electrical circuit that is typically passed as input to a model checker. In this paper, we propose an AIG structural encoding, use it to extract features of AIGs, and construct a portfolio-based model checker called Liquid. The underlying idea of the structural encoding is to enumerate all possible AIG substructures, with the frequency of each substructure encoded into a feature vector for use in subsequent machine learning. Because the performance of model-checking algorithms varies across AIGs, Liquid combines multiple such algorithms and selects the algorithm appropriate for a given AIG via machine learning. In our experiments, Liquid outperformed all state-of-the-art model checkers in the portfolio and achieved high prediction accuracy. We further studied the effectiveness of Liquid from several perspectives.
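As an illustration of the substructure-frequency idea (the paper's actual substructure set is not specified here), one can count depth-one patterns of each AND node, distinguishing fan-in types and edge inversions in AIGER-style literal encoding:

# Relative frequencies of depth-one AIG substructures (sketch).
from collections import Counter

def feature_vector(ands):
    # ands: list of (lhs, rhs0, rhs1) with AIGER literals (literal = 2*var + inverted)
    and_vars = {lhs >> 1 for lhs, _, _ in ands}
    kind = lambda lit: ("const" if lit >> 1 == 0
                        else "and" if lit >> 1 in and_vars else "input")
    patterns = Counter()
    for _, r0, r1 in ands:
        children = tuple(sorted([(kind(r0), r0 & 1), (kind(r1), r1 & 1)]))
        patterns[children] += 1
    total = sum(patterns.values())
    return {p: c / total for p, c in patterns.items()}

# tiny AIG: node 3 = x1 AND x2, node 4 = node3 AND (NOT x2)
print(feature_vector([(6, 2, 4), (8, 6, 5)]))

Larger substructures (e.g., depth-two cones or fanout patterns) would be enumerated and counted in the same way, and the resulting frequency vector fed to the algorithm selector.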
This study investigated the impact of the inclusive jet double-differential cross-section data measured by ATLAS at the Large Hadron Collider (LHC) at a center-of-mass energy of $ \sqrt{s}=2.76 $ TeV on the CT18NNLO parton distribution functions (PDFs) by applying the error PDF updating method package (ePump). First, the inclusive jet double-differential cross-sections were calculated with non-perturbative corrections using the CT18NNLO PDFs, and the theoretical predictions were observed to agree with the experimental data. Thereafter, the correlation $\cos\phi$ between the theoretical predictions for the inclusive jet double-differential cross-sections and the CT18NNLO gluon PDFs was established. Finally, ePump was applied to update the CT18NNLO PDFs, and the differences between these data and the original global fitting data were investigated. Comparing the CT18NNLO gluon PDFs with the ePump-updated gluon PDFs at $ Q=100 $ GeV shows that the ATLAS 2.76 TeV inclusive jet double-differential cross-section data can slightly constrain the CT18NNLO gluon PDFs in both the small- and large-$x$ regions.
Isothermal remanent magnetization (IRM) is an important topic in the analysis of magnetic material characteristics. However, owing to its low sensitivity, large volume, and high maintenance cost, the classical IRM measurement system cannot satisfy practical requirements, and an IRM measurement system with high sensitivity and small size is needed. Magnetic field measurement based on a rubidium atomic ensemble offers both high sensitivity and small size; accordingly, an IRM measurement system based on a rubidium atomic ensemble is proposed in this study. We focus on the design of the magnetization device and the remanence detection device. Notably, the IRM measurement system successfully measured soil samples collected from the Cherry River on the Minhang Campus of East China Normal University. Our research demonstrates that the proposed system is easy to operate and maintain and has significant application prospects in environmental magnetism, geological exploration, and biological magnetic field measurement.
Optical polarization is a fundamental property of light, and realizing high-fidelity transmission of polarization during optical signal detection is therefore important. The optical beam splitter, a conventional element for building detection optical paths and systems, can significantly affect the polarization resolution of the entire detection system. Based on a theoretical analysis of optical transmission and reflection, polarization-preserving optical paths with beam splitters can be designed to achieve polarization preservation in both reflection and transmission. Experimental data demonstrate that the polarization fidelity of the optical path reaches 95%. The polarization fidelity design scheme is low-cost, flexible to adjust, and highly functional, offering further possibilities for the analysis and application of polarized light.
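In the Jones formalism (a standard textbook formulation used here for illustration, not quoted from the paper), a beam splitter acts on the $s$ and $p$ field components through diagonal transmission and reflection matrices, and the polarization-preserving design condition is that each matrix be proportional to the identity:

$$ \mathbf{T}=\begin{pmatrix} t_s & 0 \\ 0 & t_p \end{pmatrix},\qquad \mathbf{R}=\begin{pmatrix} r_s & 0 \\ 0 & r_p \end{pmatrix}, $$

with polarization preserved in transmission when $|t_s|=|t_p|$ and $\arg t_s=\arg t_p$, and analogously for reflection; any imbalance in amplitude or phase between the $s$ and $p$ coefficients degrades the fidelity quantified above.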
In this study, a combination of synchrotron vacuum ultraviolet photoionization experiments and quantum chemical calculations was employed to investigate the reaction mechanism between cyanomethyl radicals (·CH2CN) and propyne (C3H4) in high-temperature interstellar environments, with the aim of gaining further insight into the formation mechanism of interstellar organic nitriles. By analyzing the photoionization mass spectra and photoionization efficiency curves, it was determined that the reaction may predominantly yield the open-chain isomers of 1-cyano-1,3-butadiene. Additionally, the reaction potential energy surface was explored at the B3LYP/cc-pVTZ level, revealing a barrierless addition of the cyanomethyl radical to propyne. This addition mainly leads to the formation of gauche-E-1-cyano-1,3-butadiene and/or E-1-cyano-1,3-butadiene. Conversely, the more thermodynamically stable product, pyridine, is less likely to form.
This study proposes the nanobubble isotope separation method for the first time. Separation of isotopes of light elements such as hydrogen, oxygen, carbon, and lithium was realized experimentally, and the measured separation coefficients verify the validity and effectiveness of the method. The study revealed that nanobubble isotope separation has a dual separation effect. First, separation occurs as nanobubbles form through rapid collapse and adiabatic self-shrinkage, which dissociates surface molecules, possibly owing to high temperature or nano-surface effects, leaving the bubble surface negatively charged so that it adsorbs the surrounding medium. Second, separation occurs in the subsequent isotope (ion) chemical exchange between the nanobubbles and specific solutions, which together form a separation system. Because nanobubble formation is rapid and the ion exchange between bubbles and solution is an isotopic resonance-exchange chemical reaction, the process quickly reaches equilibrium; nanobubble isotope separation is thus a separation method with a short equilibration time, overcoming the long equilibration times of conventional chemical methods. Based on a prototype stand-alone nanobubble separation machine, nanobubble isotope separation cascades are also designed to amplify the separation effect and obtain isotopes of various abundances, illustrating the possibility of industrial production.
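For reference, the single-stage separation coefficient and the amplification implied by the cascade design follow the standard definitions of isotope separation (stated here for the reader, not quoted from the paper), where $x$ is the atom fraction of the target isotope:

$$ \alpha=\frac{R_{\mathrm{product}}}{R_{\mathrm{feed}}},\qquad R=\frac{x}{1-x},\qquad \alpha_{\mathrm{cascade}}\approx \alpha^{\,n} $$

for an ideal cascade of $n$ enrichment stages in series, which is how repeated stages can raise a modest single-stage coefficient to an arbitrary target abundance.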
The aim of this study was to update the bryophyte checklist of Dajinshan Island and provide basic scientific data for in situ conservation. Based on five field investigations on the island, 67 species belonging to 38 genera in 20 families are reported herein. Compared with historical data for Dajinshan Island, 23 species are newly recorded on the island, of which 13 species are newly recorded in Shanghai. One epiphyllous liverwort species, Cololejeunea raduliloba Steph., is newly reported on Dajinshan Island. Taking into account climate change and the physiological and ecological characteristics of bryophytes, the changes in bryophyte species composition on Dajinshan Island are discussed. Our results highlight the importance of timely updates to regional checklists for conserving bryophyte biodiversity.
To ascertain the status of the type specimens held in the Herbarium of the Shanghai Natural History Museum (SHM) and to promote their utilization and sharing, the collection information of specimens in SHM was compared with type-specimen records from plant resource sharing platforms and plant taxonomy journals, and 418 type specimens were confirmed. These represent 239 species belonging to 147 genera in 69 families and include 390 newly discovered type specimens. The quantity, type category, species, dominant groups, collection locations, collection dates, and collectors of the type specimens in the herbarium were compiled and analysed.
The Cancer Genome Atlas (TCGA) and International Cancer Genome Consortium (ICGC) databases were used to collect RNA sequencing data from patients with hepatocellular carcinoma (HCC). Key genes involved in the immune response to HCC were screened using the non-negative matrix factorization (NMF) clustering method and weighted gene co-expression network analysis (WGCNA). A prognostic gene model was constructed using least absolute shrinkage and selection operator (LASSO) regression, and biological functions were analyzed using gene set enrichment analysis (GSEA). Subsequently, single-sample gene set enrichment analysis (ssGSEA) was used to assess immune infiltration and related functional differences between patients in the two risk groups. We constructed nomograms combining independent risk factors to predict overall patient survival using the "rms" package in R. Finally, preliminary clinical validation was performed using the Human Protein Atlas (HPA) database together with real-time quantitative PCR (RT-qPCR). In conclusion, we integrated the clinical characteristics of patients with their risk scores to construct a verifiable and reproducible nomogram, providing a reliable reference for the precise treatment of patients in clinical oncology.
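A minimal sketch of the LASSO step under stated assumptions (placeholder random data; the paper fits a survival model, whereas plain LASSO is shown here for brevity): genes with nonzero coefficients are retained, a linear risk score is formed, and patients are split at the median into the two risk groups compared by ssGSEA.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
X = rng.random((200, 500))   # placeholder expression matrix (samples x genes)
y = rng.random(200)          # placeholder survival-derived endpoint

model = LassoCV(cv=5).fit(X, y)
selected = np.flatnonzero(model.coef_)              # genes kept by the L1 penalty
risk_score = X[:, selected] @ model.coef_[selected] # linear prognostic score
high_risk = risk_score > np.median(risk_score)      # two risk groups for comparison
```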
This study evaluated the effects of 8 weeks of altitude training on erythropoiesis, iron metabolism, and aerobic capacity in trained rowers. Twenty-eight trained rowers were divided into altitude training (AT) and sea-level training (ST) groups. During the 8-week training camp, the training plan and load were similar in both groups. VO2peak, red blood cell count (RBC), reticulocyte percentage (RET%), hemoglobin (Hgb), and concentrations of serum erythroferrone (ERFE), ferritin (FER), and soluble transferrin receptor (sTfR) were measured before and after the camp. The results showed the following. (1) Compared with pre-training values, VO2peak and VO2peak relative to body mass (RVO2peak) increased significantly after the 8-week training in the AT group, whereas no obvious differences were observed in the ST group. The changes in VO2peak and RVO2peak differed significantly between the two groups (+9.41% vs +3.03%, p<0.05; +12.83% vs +0.80%, p<0.01). (2) After the 8-week training, RBC, Hgb, and hematocrit (HCT) increased in the AT group but showed no statistically significant change in the ST group. Changes in Hgb and HCT differed significantly between the two groups (+4.95% vs −3.21%, p<0.01; +6.48% vs −1.57%, p<0.01), and the between-group difference in RBC change showed a significant trend (+3.19% vs −3.61%, p=0.061). Compared with pre-test values, no significant changes in RET% or reticulocyte hemoglobin equivalent (RET-He) were found in either group after the 8-week training. The AT group showed significantly increased low-fluorescence reticulocytes (LFR) and reticulocyte production index (RPI) and significantly decreased medium-fluorescence (MFR) and high-fluorescence (HFR) reticulocytes. Between the two groups, changes in RET%, RET-He, LFR, MFR, HFR, and immature reticulocyte fraction (IRF) did not differ significantly; however, the change in RPI after the training camp was significant (+30.60% vs −4.52%, p<0.05). (3) In the AT group, no remarkable change in serum ERFE, a significant decrease in serum FER, and increases in serum sTfR and sTfR/lg(FER) were observed after the 8-week training. In the ST group, there were no statistical changes in serum FER, sTfR, or sTfR/lg(FER), while serum ERFE increased significantly. Changes in serum ERFE, FER, sTfR, and sTfR/lg(FER) differed significantly between the two groups (+17.99% vs +121.31%, p<0.05; −36.16% vs −2.96%, p<0.05; +82.77% vs −8.87%, p<0.05; +108.40% vs −6.96%, p<0.05). (4) The change in VO2peak was significantly positively associated with serum sTfR levels and the ratio of sTfR to lg(FER) after the 8-week training. Therefore, eight weeks of AT appears more effective than ST in improving the oxygen delivery capacity of the blood and the aerobic capacity of trained rowers. In the later stage of the 8-week AT, erythropoiesis remained active, and serum sTfR levels may be important in improving aerobic performance.
Dissolved organic carbon (DOC) is the largest reservoir of active organic matter in the ocean. Accurate characterization of the spatial and temporal patterns of DOC in large-river estuaries and neighboring coastal margins helps improve our understanding of biogeochemical processes and the fate of fluvial DOC across the estuary−coastal ocean continuum. By retrieving the absorption properties of colored dissolved organic matter (CDOM) in the dissolved organic matter (DOM) pool using machine learning models, and based on the correlation between CDOM absorption and DOC concentration, we developed an ocean DOC algorithm for the GOCI satellite. The Nu-Support Vector Regression (NuSVR) model performed best in retrieving CDOM absorption properties, with mean absolute percent differences (MAPD) of 32% and 8.6% for the CDOM absorption coefficient at 300 nm (aCDOM(300)) and the CDOM spectral slope over 275−295 nm (S275–295), respectively. DOC concentrations estimated from the seasonal linear relationship between aCDOM(300) and DOC achieved high retrieval accuracy, with MAPD of 11% for the training dataset of field measurements and 14% for the satellite validation dataset. Application of the DOC algorithm to GOCI imagery revealed that DOC levels varied dramatically on both seasonal and hourly scales. Surface DOC concentrations were elevated in summer and lower in winter, following the seasonal cycle of Yangtze River discharge, and also changed rapidly on an hourly scale under the influence of tides and local wind regimes. This study provides a useful method for improving our understanding of DOC dynamics and their environmental controls across the estuary−coastal ocean continuum.
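A sketch of the two-step retrieval with assumed placeholder data (variable names and values are illustrative, not the study's dataset): a NuSVR maps satellite band reflectances to aCDOM(300), and a seasonal linear fit then converts aCDOM(300) to DOC, evaluated with the MAPD metric reported above.

```python
import numpy as np
from sklearn.svm import NuSVR
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
rrs = rng.random((300, 6))   # placeholder: GOCI band remote-sensing reflectances
acdom = rng.random(300)      # placeholder: measured aCDOM(300), m^-1
doc = rng.random(300)        # placeholder: measured DOC concentrations

svr = NuSVR(nu=0.5, C=10.0).fit(rrs, acdom)    # step 1: reflectance -> aCDOM(300)
acdom_hat = svr.predict(rrs)

season = LinearRegression().fit(acdom_hat[:, None], doc)  # step 2: seasonal aCDOM-DOC fit
doc_hat = season.predict(acdom_hat[:, None])
mapd = np.mean(np.abs(doc_hat - doc) / doc) * 100         # accuracy metric (MAPD, %)
```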
This study investigated the ecological assets of Yingpu Street, Shanghai, using historical aerial imagery data. Using methods such as an ecological assets balance sheet and correlation analysis, changes in the ecological assets of Yingpu from 2000 to 2021, as well as the mechanisms underlying these changes, were analyzed. The results showed that, in 2021, the ecological assets of Yingpu Street consisted mainly of arable land, wetlands, and grasslands of overall moderate quality. The total ecosystem service value (ESV) was 9.39 × 10⁶ CNY (Chinese yuan), contributed mainly by water conservation and waste treatment services. The ecological assets of Yingpu decreased significantly from 2000 to 2021, with the stock and flow decreasing by 33.07% and 22.97%, respectively. Urban construction reduced farmland ecological assets and was the major contributor to the overall decline, whereas returning farmland to forests and grasslands played a key role in the substantial increase in forest and grassland ecological assets. The ESV of Yingpu was negatively correlated with night light intensity, population, GDP (gross domestic product), land surface temperature, and DEM (digital elevation model) (p < 0.001), but positively correlated with slope (p < 0.001).
Extreme events such as typhoons can change mudflat elevation by tens of centimeters. Recognizing accretion-erosion changes during typhoons and understanding the mechanisms driving them are important for coastal management and ecosystem maintenance. In this study, unmanned aerial vehicle (UAV) photogrammetry based on the Structure-from-Motion (SfM) algorithm was used to generate digital elevation models (DEMs) of a mudflat in eastern Chongming, Yangtze Estuary, before and after the passage of Typhoon In-Fa (July 2021). Hydrodynamic measurements were conducted from the bare flats to the marshes to explore the mechanisms behind the DEM changes. The accretion-erosion changes observed by UAV photogrammetry presented an obvious zonation of eroded bare flats and accreted marshes. The accuracy of the DEMs was 4.1 cm, and under the impact of the typhoon the erosion of the bare flat and the accretion of the marsh reached amplitudes of ±32 cm. During typhoons, wave height and water depth on the bare flat increase until waves break, and the surface sediment is eroded and carried by rising tides. In the marshes, by contrast, the sediment-carrying capacity of the water column decreases and the sediment is deposited. Consequently, the mudflat presents an obvious accretion-erosion zonation. By combining UAV photogrammetry and hydrodynamic measurements, this study provides a new perspective for understanding the impact of typhoons on the accretion-erosion of mudflats.
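The accretion-erosion map rests on a DEM-of-difference computation; a minimal sketch with placeholder arrays follows, where the 4.1 cm change-detection threshold comes from the DEM accuracy quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)
dem_pre = rng.random((500, 500))                       # placeholder pre-typhoon SfM DEM, m
dem_post = dem_pre + rng.normal(0, 0.1, (500, 500))    # placeholder post-typhoon DEM, m

dod = dem_post - dem_pre                  # DEM of difference: + accretion, - erosion
significant = np.abs(dod) > 0.041         # mask changes below the 4.1 cm DEM accuracy
net_change = dod[significant].mean()      # mean elevation change over detectable cells
```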
This study presents an OpenRank-based method for evaluating open-source contributions, designed to address the challenge of quantifying student contributions in open-source projects. Taking the “Open-Source Software Design and Development” course as a case study, we developed a method to assess student contributions in open-source practice. The OpenRank algorithm, which is based on developer collaboration networks, evaluates student contributions in discussions, problem-solving, and coding. Experimental results indicate that OpenRank not only aligns with traditional grading methods but also provides a more comprehensive view of student contributions. Combining OpenRank with traditional grading offers a more scientific and thorough evaluation of student contributions and skills in open-source projects.
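OpenRank's exact formulation lives in the OpenDigger open-source project rather than in this abstract; as an illustrative proxy only, the sketch below scores contributors with weighted PageRank on a collaboration graph whose edge weights count shared issue/PR activity (names and weights are made up).

```python
import networkx as nx

G = nx.Graph()
# edge weight = number of issues/PRs in which two developers both participated
G.add_weighted_edges_from([
    ("alice", "bob", 5), ("bob", "carol", 2), ("alice", "carol", 1),
])
scores = nx.pagerank(G, weight="weight")               # centrality as a contribution proxy
ranking = sorted(scores.items(), key=lambda kv: -kv[1])
print(ranking)
```

The appeal of such network-based scores for grading is that they credit discussion and review activity, not just merged commits, which matches the abstract's claim of a more comprehensive view of student contributions.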
This study proposes a multi-graph knowledge tracing method integrated with a self-attention mechanism (SA-MGKT), which models students' knowledge mastery from their historical performance on problem-solving exercises and predicts their future learning performance. First, a student-exercise heterogeneous graph is constructed to represent the high-order relationships between these two factors; graph contrastive learning is employed to capture students' answer preferences, and a three-layer LightGCN is utilized for graph representation learning. Second, information from concept-association hypergraphs and directed transition graphs is introduced, and node embeddings are obtained through hypergraph convolutional networks and directed graph convolutional networks. Finally, a self-attention mechanism fuses the internal information of the exercise sequence with the latent knowledge embedded in the representations learned from the multiple graphs, substantially enhancing the accuracy of the knowledge tracing model. Experiments on three benchmark datasets show improvements of 3.51%, 17.91%, and 1.47% in the evaluation metrics relative to baseline models, validating the effectiveness of integrating multi-graph information and the self-attention mechanism for knowledge tracing.
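A sketch of the fusion step only, with assumed dimensions and random stand-ins for the three learned embeddings (not the authors' code): the per-exercise embeddings from the three graphs are concatenated and passed through sequence self-attention before a per-step correctness prediction.

```python
import torch
import torch.nn as nn

seq_len, d = 50, 64
hetero = torch.randn(1, seq_len, d)   # stand-in: LightGCN student-exercise embeddings
hyper = torch.randn(1, seq_len, d)    # stand-in: concept hypergraph embeddings
trans = torch.randn(1, seq_len, d)    # stand-in: directed transition graph embeddings

fused = torch.cat([hetero, hyper, trans], dim=-1)   # (1, seq_len, 3d)
attn = nn.MultiheadAttention(embed_dim=3 * d, num_heads=4, batch_first=True)
out, _ = attn(fused, fused, fused)    # sequence-internal dependencies; a causal mask
                                      # would normally hide future exercises (omitted here)
prob = torch.sigmoid(nn.Linear(3 * d, 1)(out))      # per-step correctness probability
```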
In recent years, massive open online courses (MOOCs) have become a significant pathway for acquiring knowledge and skills. However, the increasing number of courses has led to severe information overload. Knowledge concept recommendation aims to identify and recommend specific knowledge points that students need to master. Existing research addresses the challenge of data sparsity by constructing heterogeneous information networks; however, there are limitations in fully leveraging these networks and considering the diverse interactions between learners and knowledge concepts. To address these issues, this study proposes a novel method, heterogeneous learning behavior-aware knowledge concept recommendation (HLB-KCR). First, it uses metapath-based random walks and skip-gram algorithms to generate semantically rich metapath embeddings and optimizes these embeddings through a two-stage enhancement module. Second, a multi-type interaction graph incorporating temporal contextual information is constructed, and a graph neural network (GNN) is employed for message passing to update the nodes, obtaining deep embedded representations that include time and interaction type information. Third, a semantic attention module is introduced to integrate meta-path embeddings with multi-type interaction embeddings. Finally, an extended matrix factorization rating prediction module is used to optimize the recommendation algorithm. Extensive experiments on the large-scale public MOOCCubeX dataset demonstrate the effectiveness and rationality of the HLB-KCR method.
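A toy sketch of the first HLB-KCR step, metapath-guided random walks (the dict-of-lists graph and the id-prefix type encoding are simplifying assumptions, not the paper's representation); the resulting walks feed a skip-gram model to learn the metapath embeddings.

```python
import random

random.seed(0)
# toy heterogeneous graph; node type is encoded as an id prefix for brevity
graph = {
    "user_1": ["course_a", "course_b"],
    "course_a": ["user_1", "user_2"],
    "course_b": ["user_1"],
    "user_2": ["course_a"],
}

def metapath_walk(graph, start, metapath, length):
    walk, node = [start], start
    for i in range(length - 1):
        wanted = metapath[(i + 1) % len(metapath)]          # next node type on the metapath
        nbrs = [v for v in graph[node] if v.startswith(wanted)]
        if not nbrs:
            break
        node = random.choice(nbrs)
        walk.append(node)
    return walk

print(metapath_walk(graph, "user_1", ["user", "course"], 5))
```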
In massive open online courses (MOOCs), knowledge concept recommendation aims to analyze and extract learning records from a platform to recommend personalized knowledge concepts to users, thereby avoiding the inefficiencies caused by blind selection of learning content. However, existing methods often fail to fully exploit the multidimensional aspects of user behavior data, such as sequential information and complex interactions. To address this issue, we propose STRec, a sequence-aware, multi-type behavioral-data-driven knowledge concept recommendation method for MOOCs. STRec extracts the sequential information of knowledge concepts and combines it with the features produced by graph convolutional networks using an attention mechanism, facilitating prediction of a user's next knowledge concept of interest. Moreover, by employing multi-type contrastive learning, the method integrates user interest preferences with various interaction relationships to accurately capture personalized features from complex interactions. Experimental results on the MOOCCube dataset demonstrate that the proposed method outperforms existing baseline models across multiple metrics, validating its effectiveness and practicality for knowledge concept recommendation.
Automated content review of digital educational resources is in urgent demand in the era of educational informatization. In particular, when reviewing the applicability of resources, i.e., whether they exceed curriculum standards, knowledge points that go beyond the national curriculum standards are easy to introduce yet difficult to locate. In response to this demand, this study proposes a review method for educational resources based on the collaboration of an educational knowledge graph and a large language model. Specifically, the study first uses ontology concepts to design and construct a knowledge graph for curriculum education in primary and secondary schools. A knowledge localization method is then designed based on teaching content generation, ranking, and pruning, exploiting the strengths of large language models in text generation and ranking tasks. Finally, by detecting conflicts between the core knowledge sub-graph of the teaching content and the teaching paths in the knowledge graph, teaching content that exceeds the national standard is recognized. Experimental results demonstrate that the proposed method effectively addresses the task of reviewing beyond-standard knowledge in educational resource content, opening a new technological direction for educational applications based on knowledge graph and large language model collaboration.
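A toy illustration of the final conflict-detection step, with made-up knowledge points (the real system compares a core knowledge sub-graph against teaching paths in the curriculum knowledge graph; set difference is the simplest possible stand-in for that comparison):

```python
# knowledge reachable along the curriculum-standard teaching path at this stage
curriculum_path = {"fractions", "decimals", "percentages"}
# core knowledge sub-graph nodes extracted from a resource (e.g., by the LLM)
extracted = {"fractions", "percentages", "linear equations"}

beyond_standard = extracted - curriculum_path   # conflict set
if beyond_standard:
    print("Content exceeds the curriculum standard:", beyond_standard)
```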
Drawing on constructivism and competency-based theory, this paper proposes an online learning system design method based on a knowledge graph, which breaks with the traditional knowledge structure and builds a multi-dimensional framework of knowledge and skills aimed at improving competence. A learning system with the knowledge graph as its underlying logic and linked digital learning resources was built, and teaching practice and empirical research were then carried out. First, the learning system was validated with a questionnaire. Second, taking the ability to "read English academic papers" as the learning task, experimental and control groups were created to evaluate participants' understanding of knowledge and skills, memory level, and comprehensive application ability. The results showed that the effectiveness and usability of the learning system were higher in the experimental group than in the control group in terms of total, knowledge, skill, and ability scores; of these, the total and ability scores showed significant differences, indicating that the system promoted the effectiveness of online learning.
With the rapid development of artificial intelligence technology, large language models (LLMs) have demonstrated strong abilities in natural language processing and various knowledge applications. This study examined the application of Chinese LLMs to the automatic labelling of knowledge graphs for primary and secondary school subjects, specifically Morality and Law in the compulsory education stage and high school Mathematics. In education, the construction of knowledge graphs is crucial for organizing systematic knowledge; however, traditional construction methods suffer from inefficient and labor-intensive data labelling. This study aimed to solve these problems using LLMs, thereby improving the automation and intelligence of knowledge graph construction. Based on the status quo of domestic LLMs, this paper discusses their application to the automatic labelling of subject knowledge graphs, taking Morality and Law and Mathematics as examples to explain the methods and experimental results. First, the research background and significance are discussed. Second, the development status of domestic LLMs and of automatic labelling technology for subject knowledge graphs is presented. In the methods and model section, an automatic labelling method based on LLMs is explored, together with a subject knowledge graph model used to compare and evaluate the method's actual effect. In the experiments and analysis section, automatic labelling experiments on Morality and Law and Mathematics show that the knowledge graphs of the two disciplines can be labelled automatically with high accuracy and efficiency; a series of valuable conclusions are obtained, and the effectiveness and accuracy of the proposed methods are verified. Finally, future research directions are discussed. Overall, this study provides a new concept and method for the automatic labelling of subject knowledge graphs and is expected to promote further developments in related fields.
Advancements in machine-learning technology have enabled automated program-repair techniques that learn human patterns of fixing erroneous code, thereby assisting students in debugging and enhancing their self-directed learning efficiency. Automatic program-repair models are typically based on either manually designed symbolic rules or data-driven methods. Owing to the availability of large language models with excellent natural-language understanding and code-generation capabilities, researchers have attempted to use prompt engineering for automatic program repair. However, existing studies primarily evaluate commercial models such as Codex and GPT-4, which may incur high costs for large-scale adoption and raise data-privacy issues in educational scenarios. Furthermore, these studies typically employ simple prompt forms to assess the program-repair capabilities of large language models, and the results are not analyzed comprehensively. Hence, we evaluate two representative open-source code large language models with excellent code-generation capability using prompt engineering. We evaluate different prompting methods, such as chain-of-thought and few-shot learning, and analyze the results comprehensively. Finally, we provide suggestions for integrating large language models into programming education scenarios.
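A sketch of how a few-shot repair prompt with an optional chain-of-thought instruction might be assembled (the prompt wording and the commented-out model call are illustrative assumptions, not taken from the paper):

```python
FEW_SHOT_EXAMPLE = """### Buggy program
def mean(xs):
    return sum(xs) / len(x)
### Fixed program
def mean(xs):
    return sum(xs) / len(xs)
"""

def build_repair_prompt(buggy_code, chain_of_thought=False):
    instruction = "Fix the bug in the student program below."
    if chain_of_thought:
        instruction += " First explain the bug step by step, then give the fixed program."
    return (f"{instruction}\n\n{FEW_SHOT_EXAMPLE}"
            f"### Buggy program\n{buggy_code}\n### Fixed program\n")

prompt = build_repair_prompt("def area(r): return 3.14 * r * 2", chain_of_thought=True)
# repaired = code_llm.generate(prompt)   # hypothetical call to an open-source code LLM
```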
Against the backdrop of the national new engineering education initiative, early C++ teaching has failed to meet the requirements for sophistication, innovation, and challenge, and issues such as fragmented knowledge points, difficulty integrating theory with practice, and single-perspective bias are prevalent. To address these problems, we propose an innovative teaching model that effectively integrates QT (Qt Toolkit) and C++ by merging the two courses. The model supports the teaching process via a course knowledge graph deployed on the Zhihuishu platform. The breadth of teaching is expanded by effectively linking course knowledge points, integrating and sharing multimodal teaching resources, enhancing multi-perspective learning, showcasing the course's innovative nature, and avoiding single-perspective bias. Simultaneously, the depth of teaching is increased by constructing a knowledge graph that integrates QT and object-oriented programming (C++), organically combining the knowledge points of both courses. This approach bridges the gap between theory and practice while enhancing the course's sophistication and level of challenge. This study thus pioneers the reform of C++ teaching and provides valuable references and insights for programming courses under the new engineering education framework.
In the digital education domain, developers of platforms such as online classrooms face privacy issues and insufficiently large existing datasets in their pursuit of data-driven optimization. To address this, a set of heterogeneous data models adapted to the characteristics of education was constructed, and corresponding data generation tools (E-Tools) that simulate data interactions in complex educational scenarios were implemented. Experimental results show that the tools maintain an efficient data generation speed (64−74 MB·s⁻¹) across a variety of data sizes and exhibit good linear scalability, validating the models' effectiveness and the tools' ability to generate larger data volumes. A heterogeneous data query load reflecting students' learning behaviors was also designed, providing strong support for performance evaluation and optimization of education platforms.
Conventional education big data management faces security risks such as privacy data leakage, questionable data credibility, and unauthorized access. To avoid these risks, a novel education big data security management and privacy protection method based on smart contracts, ASPES, is proposed. It integrates an improved key splitting and sharing algorithm based on Shamir secret sharing, a hybrid encryption algorithm based on SM2-SHA256-AES, and a smart contract management algorithm based on hierarchical data access control. Experiments conducted on the real MOOCCube dataset indicate that the execution efficiency and security of ASPES are significantly better than those of state-of-the-art methods; ASPES can effectively store and manage education big data and realize the reasonable distribution of educational resources. By embedding smart contracts into the blockchain and recording operations such as data reading and writing on the blockchain, ASPES optimizes the management path, improves management efficiency, ensures fairness in education, and considerably improves the quality of education.
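A minimal sketch of textbook Shamir (k, n) secret sharing over a prime field (ASPES builds on an improved variant of this scheme, so only the baseline idea is shown; the prime and the example secret are arbitrary):

```python
import random

P = 2**127 - 1  # a Mersenne prime used as the field modulus

def split(secret, k, n):
    """Split a secret into n shares; any k shares reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the degree-(k-1) polynomial at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # den^-1 via Fermat
    return secret

shares = split(123456789, k=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
```

Splitting the key this way means no single node stores the full decryption key, which is the property the hierarchical access control in ASPES relies on.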
Query optimization can significantly enhance the analysis efficiency of online analytical processing (OLAP) database systems for massive educational data, providing fast and accurate data support for intelligent educational systems. An optimizer mainly consists of three modules: cardinality estimation, plan-space enumeration, and the cost model. Cardinality estimation determines the results of the cost model and guides the selection of query plans, so evaluating the cardinality estimation module plays a crucial role in optimizing OLAP database systems. This study designs and implements an effective workload generation tool based on primary-key-driven diversified data distribution and data relationship construction. The tool includes data generation technology with custom relationships, workload template generation technology based on finite state machines, and parameter instantiation technology driven by target cardinality. Experiments were conducted on three databases, OceanBase, TiDB, and PostgreSQL, analyzing issues in their optimizers and providing improvement suggestions.
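A toy sketch of target-cardinality-driven parameter instantiation (illustrative only; the tool's actual mechanism is more elaborate): choose the constant in a "col <= ?" template so the predicate selects roughly the target fraction of rows, using a sorted sample of the column as an empirical quantile function.

```python
def instantiate(sorted_sample, target_fraction):
    """Pick a quantile-based constant so 'col <= const' hits ~target_fraction rows."""
    idx = round(target_fraction * (len(sorted_sample) - 1))
    return sorted_sample[idx]

sample = sorted([3, 17, 4, 9, 25, 12, 8, 1, 30, 6])   # made-up column sample
const = instantiate(sample, 0.1)                      # aim for ~10% selectivity
query = f"SELECT * FROM students WHERE score <= {const}"
```

Comparing the optimizer's estimated cardinality for such a query against the known target directly exposes estimation errors.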
In the modern educational environment, efficient and reliable data management systems are essential for the operation of online education platforms and student information management systems. With the continuous growth of educational data and the increasing frequency of multi-user access, database systems face high throughput requirements in the presence of conflicting concurrent operations. Among the many concurrency control strategies, lock-based control is commonly used in database systems; however, the blocking caused by locks degrades the performance of concurrent transaction execution. Existing work mainly reduces lock contention by scheduling the execution order between transactions or by optimizing stored procedures. To further improve transaction throughput, this study performs blocking analysis and cost modeling within transactions based on lock avoidance, and proposes an intra-transaction scheduling strategy: the scheduling cost is estimated from the blocking profile of the workload, and the operation order within a transaction is then exchanged, to a limited extent and according to certain rules, to reduce the delay caused by lock blocking and thereby improve performance. Finally, comparison of the conventional and proposed scheduling strategies verifies that the latter improves throughput and reduces the average transaction delay.
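A toy sketch of the reordering idea (a greedy heuristic with a made-up cost table, not the paper's cost model): among operations whose intra-transaction dependencies are satisfied, run those touching cold records first, so locks on hot records are acquired as late as possible and held briefly.

```python
def schedule(ops, contention, depends_on):
    """Greedy intra-transaction reordering by estimated lock-blocking cost."""
    scheduled, remaining = [], set(ops)
    while remaining:
        done = set(scheduled)
        ready = [o for o in remaining if depends_on[o] <= done]  # dependencies met
        nxt = min(ready, key=lambda o: contention[o])            # coldest first
        scheduled.append(nxt)
        remaining.remove(nxt)
    return scheduled

ops = ["read_a", "write_hot", "read_b"]
contention = {"read_a": 0.1, "write_hot": 5.0, "read_b": 0.2}
depends_on = {"read_a": set(), "write_hot": {"read_a"}, "read_b": set()}
print(schedule(ops, contention, depends_on))  # ['read_a', 'read_b', 'write_hot']
```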
This study introduces and implements a local, lightweight, intelligent teaching-assistant system. Using the IPEX-LLM (Intel PyTorch Extension for Large Language Models) acceleration library, the system can efficiently deploy and run large language models fine-tuned with the QLoRA (Quantized Low-Rank Adaptation) framework on devices with limited computational resources. Combined with retrieval-augmented techniques, the system provides flexible course customization through four major functional modules: intelligent Q&A, automated question generation, syllabus creation, and course PPT generation. The system is intended to help educators improve the quality and efficiency of lesson preparation and delivery, safeguard data privacy, support personalized student learning, and offer real-time feedback. Performance tests on the optimized Chatglm3-6B model show the system's rapid inference capability, processing a 64-token output task within 4.08 s in a resource-constrained environment. A practical case study comparing the system's functionality with native Chatglm-6B and ChatGPT 4.0 further validates its superior accuracy and practicality.
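A sketch of low-bit local deployment following IPEX-LLM's Transformers-style interface as documented upstream; the import path, flags, and model path below should all be treated as assumptions rather than the authors' configuration.

```python
from ipex_llm.transformers import AutoModelForCausalLM  # IPEX-LLM's drop-in wrapper
from transformers import AutoTokenizer

MODEL_PATH = "path/to/finetuned-chatglm3-6b"   # placeholder for QLoRA-finetuned weights

model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    load_in_4bit=True,        # low-bit quantization for resource-constrained devices
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, trust_remote_code=True)

inputs = tokenizer("Generate three quiz questions on binary trees.", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)   # cf. the 64-token benchmark above
print(tokenizer.decode(output[0], skip_special_tokens=True))
```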
Building an intelligent education platform is an important step in promoting intelligent education. However, the artificial intelligence models on which intelligent education platforms rely consume large amounts of electricity during training, so short-term power load forecasting is of great significance for building such platforms. Two issues limit forecasting accuracy: when multiple attributes are considered, some attributes correlate only weakly with the power load data; and the Transformer cannot capture the temporal correlation of power load data. Therefore, SF-Transformer, a short-term power load forecasting model based on the SR (Székely and Rizzo) distance correlation coefficient, fused temporal-positional encoding, and the Transformer, is proposed. SF-Transformer filters the attributes affecting the power load data using the SR distance correlation coefficient, selecting those with higher SR distance correlation with the power load. It adopts fused temporal-positional encoding, combining global time encoding with local positional encoding, which helps the model comprehensively obtain time and position information about the power load data. Experiments on the dataset show that SF-Transformer achieves lower RMSE (root mean square error) and MAE (mean absolute error) than other power load forecasting models over two time horizons.
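For reference, the Székely-Rizzo distance correlation used for attribute filtering can be computed as follows (the standard O(n²) sample formula with numpy; the threshold in the usage comment is an arbitrary illustration, not the paper's setting):

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation of two 1-D series (Székely-Rizzo)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])            # pairwise distance matrices
    b = np.abs(y[:, None] - y[None, :])
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()   # double centering
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()                         # squared distance covariance
    denom = np.sqrt(np.sqrt((A * A).mean() * (B * B).mean()))
    return np.sqrt(max(dcov2, 0.0)) / denom if denom > 0 else 0.0

# keep attributes whose distance correlation with the load exceeds a threshold
# selected = [name for name, col in attrs.items()
#             if distance_correlation(col, load) > 0.3]
```

Unlike Pearson correlation, this statistic is zero only under independence, which is why it can screen attributes with nonlinear relationships to the load.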
Recently, with the development of artificial intelligence, visual recognition, and edge intelligent computing, intelligent patrol and online monitoring technologies based on visual recognition have found important applications in campus security, laboratory safety monitoring, and industrial production operation and maintenance monitoring. Campus security and laboratory safety monitoring aim to protect the personal safety of students and teachers and to prevent incidents such as campus bullying and laboratory accidents. Industrial operation and maintenance monitoring aims to identify and provide early warning of hidden dangers and defects in equipment or operational behavior, avoiding the large losses that faults and hazards can cause. In security and production monitoring tasks, manual real-time detection is labor-intensive and inefficient, and human negligence can leave dangers undetected. Therefore, based on the needs of campus security and industrial operation and maintenance monitoring, this study designs and implements a microservice-based intelligent patrol system for campus security and the operation and maintenance monitoring of industrial substations. The system requires little manual participation and can automatically conduct patrols, identify dangers, and provide early warnings. It adopts an advanced scheduling system that takes only 3−5 min per patrol, considerably improving the efficiency of hazard detection. The system can be applied to the intelligent security patrol of campuses and industrial substations.