This paper studies the oscillation and asymptotic properties of delay differential equations with damping and sub-linear neutral terms using the generalized Riccati transformation technique and the mean value theorem. After analyzing the interplay between the condition $\int^\infty_{t_0}(\frac{1}{R(t)})^{\frac{1}{\gamma}}{\rm{d}}t=\infty$ and the relationship between the parameters $\gamma$ and $\beta$ in the oscillation of the equations, sufficient conditions for the existence of oscillatory solutions are provided, extending the existing results in the cited literature. Lastly, some applications are given to illustrate the significance of these results.
This study builds on the method developed by Wang for images of multilinear polynomials on the algebra of upper triangular $2\times2$ matrices. The main goal of the paper is to describe the images of multilinear polynomials on the algebra of upper triangular $3\times 3$ matrices, thereby partly solving the Fagundes and Mello conjecture, a variation of the famous Lvov-Kaplansky conjecture.
In this paper, we study the long-time dynamic behavior of solutions of the non-autonomous classical reaction-diffusion equation with nonlinear boundary conditions and fading memory, where the internal and boundary nonlinearities adhere to polynomial growth of arbitrary order as well as the balance condition, and the forcing term is translation bounded rather than translation compact. Using the contractive function method and process theory, the existence and the topological structure of uniform attractors in $L^{2}(\Omega)\times L_\mu^2(\mathbb R^+; H_{0}^1(\Omega))$ are proven. This result extends and improves existing research in the literature.
In this paper, we investigate the uniqueness and distribution of zeros of a class of difference polynomials by using Nevanlinna’s value distribution theory. We obtain results about the uniqueness of the difference polynomials $P(f)\sum_{i=1}^{k}t_{i}f(z+c_{i})$ and the distribution of zeros of the difference polynomials $P(f)(\sum_{i=1}^{k}b_{i}(z)f(z+c_{i}))^s-b_0(z)$, where $f(z)$ is a transcendental entire function of finite order, $c_i, t_i\;(i=1, 2, \cdots,k)$ are non-zero constants, and $b_i(z)\;(i=0, 1, \cdots,k)$ are small functions with respect to $f(z)$.
The development of wireless communication has made spectrum resources increasingly scarce. Existing spectrum resources, however, are not currently used in an efficient way. This contradiction can usually be attributed to static spectrum allocation strategies. Cognitive radio (CR) is widely regarded as a feasible solution to the problem of static spectrum allocation. In recent years, deep learning, an emerging field of machine learning, has contributed to a number of notable research and application achievements and has become one of the driving technologies behind artificial intelligence. In this paper, we investigate the application of deep learning to CR, including the development of cognitive radio and deep learning as well as the use of deep learning models in key technologies for CR (such as spectrum prediction, spectrum environment sensing, and signal analysis). Lastly, we summarize and discuss conclusions from this review.
Modified Newtonian dynamics is a major competitor of dark matter theory and contains not only a gravitational constant but also an acceleration constant. Based on a circular orbit solution for a two-body problem, this paper is devoted to studying a plane circular restricted three-body problem using modified Newtonian dynamics. We work out the Lagrangian points and the Hill curves, akin to those in Newtonian dynamics. Unlike the Newtonian case, however, the location and number of Lagrangian points, as well as the profile of the Hill region, depend on both the acceleration constant and the mass ratio of the main celestial bodies. These findings reveal a new avenue for testing modified Newtonian dynamics.
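As an illustrative sketch (not code from the paper): in MOND the true acceleration $g$ satisfies $g\,\mu(g/a_0)=g_N$, where $g_N$ is the Newtonian value and $a_0$ the acceleration constant. With the commonly used "simple" interpolating function $\mu(x)=x/(1+x)$ — an assumption here, chosen because it inverts in closed form — one recovers Newtonian gravity at high accelerations and the deep-MOND limit $g\approx\sqrt{g_N a_0}$ at low ones.

```python
import math

def mond_acceleration(g_newton, a0=1.2e-10):
    """Invert g * mu(g/a0) = g_newton for the 'simple' interpolating
    function mu(x) = x/(1+x): the quadratic g^2 - g_N g - g_N a0 = 0
    gives a closed-form positive root (SI units, m/s^2)."""
    return 0.5 * (g_newton + math.sqrt(g_newton**2 + 4.0 * g_newton * a0))

# high-acceleration regime: g is close to g_N (Newtonian limit)
g_hi = mond_acceleration(1e-6)
# deep-MOND regime: g is close to sqrt(g_N * a0)
g_lo = mond_acceleration(1e-14)
```

The closed-form root makes the two limiting regimes easy to check numerically, which is the behavior the Lagrangian-point analysis in the abstract relies on.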
Quantum theory has the characteristics of superposition, entanglement, incompatibility, and interference, which make it an excellent modeling framework. For the purpose of sentence matching, we explore the ability of quantum theory as a framework to capture sentence meaning and model semantic processes. We use quantum states to construct the semantic Hilbert space and calculate the fidelity of information during sentence transformation. The similarity of sentences is subsequently determined by using word embedding technology to represent words or concepts in semantic vector spaces. Simulation data showed that the proposed method achieved better results than traditional methods on sentence matching datasets constructed from real business scenarios. Hence, this paper provides a new idea for similarity research of multiple sentences and introduces a breakthrough in interdisciplinary research between computer science and quantum theory, in line with current research trends.
There is an inconsistency between the Hubble parameter obtained from local measurements and model-based parameters obtained from cosmic microwave background (CMB) measurements. This inconsistency motivated us to consider new cosmological models based on $\Lambda {\rm{CDM}}$ (Lambda Cold Dark Matter Model), such as a large-scale Lorentz violation model with non-vanishing spatial curvature. The degeneracy among the spatial curvature, cosmological constant, and cosmological contortion distribution makes the model viable for interpretation of the observation data. By comparing the luminosity distance modulus and redshift with the model prediction and calculating the change in matter density as well as the cosmological constant over time, we limit the spatial curvature density to a certain range. Accordingly, we discuss the performance of the large-scale Lorentz violation model with non-vanishing spatial curvature under these constraints.
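For illustration only (the cosmological parameter values below are generic assumptions, not the paper's fit): the comparison of luminosity distance modulus against redshift starts from $\mu(z)=5\log_{10}(d_L/10\,\mathrm{pc})$, where in flat $\Lambda$CDM $d_L=(1+z)\,(c/H_0)\int_0^z \mathrm{d}z'/E(z')$ with $E(z)=\sqrt{\Omega_m(1+z)^3+\Omega_\Lambda}$. A minimal numerical sketch:

```python
import math

def comoving_distance_Mpc(z, H0=70.0, Om=0.3, OL=0.7, steps=1000):
    """Line-of-sight comoving distance in flat LambdaCDM:
    (c/H0) * integral_0^z dz'/E(z'), by the trapezoidal rule."""
    c = 299792.458  # speed of light, km/s
    E = lambda zp: math.sqrt(Om * (1.0 + zp) ** 3 + OL)
    h = z / steps
    s = 0.5 * (1.0 / E(0.0) + 1.0 / E(z))
    s += sum(1.0 / E(i * h) for i in range(1, steps))
    return (c / H0) * s * h

def distance_modulus(z, **kw):
    """mu(z) = 5 log10(d_L / 10 pc), with d_L = (1+z) d_C in Mpc."""
    d_L = (1.0 + z) * comoving_distance_Mpc(z, **kw)
    return 5.0 * math.log10(d_L) + 25.0
```

Confronting such model predictions with supernova distance moduli is the generic procedure behind the constraint on the spatial curvature density described in the abstract; the Lorentz-violation model modifies $E(z)$ rather than the distance relation itself.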
In this study, the low-energy effective field theory approach is used to analyze nuclear matter and a zero-temperature Fermi system. By solving the Bethe-Goldstone equation (BGE) in the $^1S_0$ channel, we obtain the closed-form Brückner G matrix and derive its renormalized non-perturbative form. Upon selecting values for the relevant parameters, a number of physical issues are analyzed with the Brückner G matrix, such as pairing and the single-particle energy of a Fermi system in the density background. Lastly, the framework and results are compared with those published in the literature.
In this paper, we use time-resolved Kerr rotation (TRKR) spectroscopy to study the electron spin coherence dynamics of a wurtzite (0001) plane n-CdS single crystal at different temperatures and wavelengths. Two types of electron spin signals are observed in this material at low temperatures. One is a long-lived spin signal at relatively long pump-probe wavelengths, where the spin dephasing time exceeds 4.8 ns at 5 K and decreases with increasing temperature. The other is a short-lived spin signal at relatively short pump-probe wavelengths, where the spin dephasing time is about 40 ps and persists up to room temperature; in this case, the spin signal is largely independent of temperature. Our analysis shows that the long-lived spin signal can be attributed to localized electrons, while the short-lived spin signal can be attributed to delocalized conduction electrons.
High-order harmonic generation (HHG) may occur during the interaction between an intense laser field and an atom or molecule; HHG has become an important extreme ultraviolet (XUV) light source that can be used to probe atomic and molecular structures. In this paper, we investigate the effect of the radial distribution of the electron density on the HHG spectra by calculating the HHG spectrum of noble atomic gases in a polarized laser field using s and p orbital functions as ground state wave functions. The results show that the form of the wave function does not influence the cutoff value of the harmonic spectrum, which is determined by the ionization threshold energy and the laser intensity. However, different types of orbital wave functions do lead to different envelopes for the HHG spectrum. In particular, there is an additional dip in the plateau area for the p orbital case compared with the spectrum for the s orbital case. By analyzing the formula for the HHG spectrum, we attribute the dip position on the HHG spectrum to the density distribution of the ground state wave function in momentum space. This work may shed light on applications using the HHG spectrum to visualize atomic orbitals.
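As a worked example of the cutoff rule the results confirm (a sketch using the standard three-step-model formula, not code from the paper): the cutoff photon energy is $E_{\rm cutoff}=I_p+3.17\,U_p$, where the ponderomotive energy $U_p$ depends only on laser intensity and wavelength — hence the cutoff's independence of the orbital shape. The argon example below is illustrative.

```python
def ponderomotive_energy_eV(intensity_W_cm2, wavelength_um):
    """Ponderomotive energy: Up [eV] = 9.33e-14 * I [W/cm^2] * lambda^2 [um^2]."""
    return 9.33e-14 * intensity_W_cm2 * wavelength_um**2

def hhg_cutoff_eV(Ip_eV, intensity_W_cm2, wavelength_um):
    """Three-step-model cutoff: E_cutoff = Ip + 3.17 Up."""
    return Ip_eV + 3.17 * ponderomotive_energy_eV(intensity_W_cm2, wavelength_um)

# e.g. argon (Ip = 15.76 eV) in an 800 nm, 1e14 W/cm^2 field
cutoff = hhg_cutoff_eV(15.76, 1e14, 0.8)
```

The cutoff scales linearly with intensity and quadratically with wavelength, which is why long-wavelength drivers extend the plateau.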
In this paper, we study the strong and weak coupling between excitons of a WSe2 monolayer and a light field in a self-made Fabry–Pérot semiconductor microcavity at 300 K. The optical properties of the sample were studied using a micro-fluorescence/white-light reflection spectroscopy system with integrated angular resolution; the formation of exciton polaritons was observed in the strong coupling region, corresponding to a Rabi splitting energy of 46.7 meV. The theoretical fitting results agree with the experimental phenomena. This lays the foundation for further research on the coherent properties of exciton polaritons, and the study also provides ideas for the application of industrial optoelectronic devices in the future.
In this paper, whispering gallery modes (WGMs) excited in a two-dimensional electromagnetic Helmholtz cavity are studied using a rigorous, generalized dual series approach. The excitation wavelengths of several whispering gallery modes are given, and the dependence of electromagnetic whispering gallery modes on the angle of incidence and the angular width of the cavity opening is investigated. It was found that WGMs are very sensitive to slight changes in wavelength or the angular width of the opening; at the same time, WGMs can be excited across a wide range of incident angles given a fixed orientation angle of the cavity. This shows that the angular width of the opening has a significant influence on the performance of Helmholtz cavities and hence is a key parameter in their design. On the other hand, given the lack of sensitivity to the incident angle, no particular specification is needed when designing an artificially structured electromagnetic material using these Helmholtz cavities; accordingly, the fabrication difficulty is relatively low.
The square potential barrier is an ideal model for investigation of quantum tunneling. We simulate the square potential barrier by using the dipole potential for the interaction between an atom and a blue-detuned far-off-resonant super-Gaussian beam, as well as the ponderomotive potential for the interaction between an electron and a super-Gaussian beam. A comparison between the numerical results for scattering by the super-Gaussian potential barrier and the analytical results for scattering by a square potential barrier shows that a super-Gaussian beam with an order exceeding 20 could simulate a square potential barrier accurately. We also show that two super-Gaussian beams could be used to study the resonant quantum tunneling effect. In summary, our results could be applied to an experimental investigation of quantum tunneling through a square potential barrier.
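The analytical benchmark used in such a comparison is the textbook transmission coefficient for a square barrier. A minimal sketch in natural units ($\hbar=m=1$; the parameter values are illustrative, not those of the paper's beams):

```python
import math

def square_barrier_transmission(E, V0, L):
    """Transmission through a square barrier of height V0 and width L
    for particle energy 0 < E < V0 (hbar = m = 1):
    T = [1 + V0^2 sinh^2(kappa L) / (4 E (V0 - E))]^(-1),
    with kappa = sqrt(2 (V0 - E))."""
    kappa = math.sqrt(2.0 * (V0 - E))
    s = math.sinh(kappa * L)
    return 1.0 / (1.0 + (V0**2 * s**2) / (4.0 * E * (V0 - E)))

# tunneling probability falls off rapidly with barrier width
probabilities = {L: square_barrier_transmission(E=0.5, V0=1.0, L=L)
                 for L in (0.5, 1.0, 2.0)}
```

It is this closed-form curve that the numerical scattering results for the super-Gaussian potential are compared against; agreement for super-Gaussian orders above ~20 is the criterion quoted in the abstract.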
Two-dimensional materials have been used in applications across a variety of fields; transition metal dichalcogenides (TMDCs), in particular, are candidates for use in the field of optoelectronics due to the presence of a band gap. In this paper, WS2 monolayers prepared by micro-mechanical exfoliation are transferred to two micro-period electrode structures. We found that the photoluminescence (PL) of the material is modulated by an external bias. We studied the effects of bias on the PL of the WS2 monolayer at room temperature and low temperature. The corresponding characteristics and physical mechanisms of the PL spectra, moreover, are analyzed and discussed. With the application of bias to modulate the optical properties of the WS2 monolayer, it is expected that the technology can be applied to many photoelectric products, including field effect transistors, photodetectors, flexible electronic devices, and heterojunction devices.
The objective of this paper is to propose a mathematical theory that can describe the non-Markovian characteristics of the network spreading process, thereby establishing theoretical support for controlling the propagation of diseases or rumors in the real world. Using the second-order mean-field approximation method and the concept of idle edges, a series of partial differential equations is presented that can be used to solve the non-Markovian spreading dynamics of a susceptible-infected (SI) model in complex networks. A comparison of simulation outputs with the theoretical results shows that this mathematical method can accurately predict the spreading process of the SI model on complex networks. The theory, moreover, can be used to predict the average time for a single node to be infected. The correctness and accuracy of the theory are verified by experimental simulation results.
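As a toy illustration of the setting (an event-driven simulation, not the paper's mean-field equations; the Weibull delay distribution and the ring network are assumptions chosen for illustration): in a non-Markovian SI model, the transmission time along each susceptible-infected edge is drawn from a non-exponential distribution, so the process has memory.

```python
import heapq
import random

def simulate_si(neighbors, seed_node, shape=2.0, scale=1.0, rng_seed=1):
    """Event-driven non-Markovian SI spreading: each S-I edge fires after
    a Weibull-distributed delay; returns {node: infection time}."""
    rng = random.Random(rng_seed)
    times = {seed_node: 0.0}
    events = []  # min-heap of (candidate infection time, node)
    def expose(node, t):
        for nb in neighbors[node]:
            if nb not in times:
                heapq.heappush(events, (t + rng.weibullvariate(scale, shape), nb))
    expose(seed_node, 0.0)
    while events:
        t, node = heapq.heappop(events)
        if node in times:
            continue  # stale event: node already infected earlier
        times[node] = t
        expose(node, t)
    return times

# ring network of 20 nodes, infection seeded at node 0
N = 20
ring = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}
times = simulate_si(ring, seed_node=0)
```

Averaging such per-node infection times over many runs gives exactly the quantity the theory in the abstract predicts analytically.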
Intonation is the tone of speech, formed by variations in pitch and emphasis; it is one of the characteristics of human emotion transmission. By adjusting the intonation parameters to change the length and height of certain words in discourse, the controlled intonation can mimic the effect of singing; this approach, in turn, can be used to address the lack of research on singing-voice synthesis. The cepstrum method is used to extract the pitch frequency, the LPC (linear predictive coding) method is used to estimate the formants, and a high-order polynomial is used to fit the pitch of the voice; the fitting function is then adjusted in real time to form the tone required to achieve the effect of singing. Given two basic speech parameters, pitch frequency and formant, combined with the mathematical nature of pronunciation, this paper uses an intuitive mathematical method to synthesize the effect of singing; using this method, the original voice and the synthetic voice reach an overall recognition rate of 87.6%. The result of this synthesis shows that by adjusting the parameters of speech synthesis, we can achieve greater control over the synthesized singing voice.
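As a simplified stand-in for the pitch-tracking step (the paper uses the cepstrum method; this sketch uses plain time-domain autocorrelation instead, and the synthetic signal, sampling rate, and search range are illustrative assumptions):

```python
import math

def autocorrelation_pitch(signal, fs, fmin=50.0, fmax=500.0):
    """Estimate pitch frequency as fs / argmax of the autocorrelation
    over lags corresponding to the [fmin, fmax] Hz pitch range."""
    n = len(signal)
    lo, hi = int(fs / fmax), int(fs / fmin)
    def r(lag):
        # unnormalized autocorrelation; peaks at the pitch period
        return sum(signal[i] * signal[i + lag] for i in range(n - lag))
    best_lag = max(range(lo, hi + 1), key=r)
    return fs / best_lag

# synthetic voiced frame: 200 Hz fundamental with five decaying harmonics
fs, f0, n = 8000, 200.0, 512
frame = [sum(math.sin(2 * math.pi * h * f0 * i / fs) / h for h in range(1, 6))
         for i in range(n)]
pitch = autocorrelation_pitch(frame, fs)
```

A real-time pitch-shifting pipeline like the one described would run such an estimator per frame and then resynthesize with the polynomial-fitted target pitch contour.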
The photoanode of I-doped TiO2 nanotube arrays (ITNA) prepared by anodization exhibited better degradation performance than TNA. The planar photocatalytic fuel cell (p-PFC) obtained by combining ITNA and Pt electrodes achieved a maximum decolorization rate of 93.1% when the concentration of methylene blue (MB) was 6 mg·L−1 and the electrode plate spacing was 1.0 cm. The degradation of MB occurred on the surface of ITNA, which was the rate-limiting step. Compared to other structures, p-PFC had higher photocatalytic performance and better production of h+ and ·OH while degrading MB and other organics.
In this study, four typical typhoons that significantly affected Shanghai were selected based on their respective intensity and the water level along the Shanghai Coast. The RMW (Radius of Maximum Winds) formula, moreover, was determined using in-situ data from recent typhoons. The typhoon model was built and validated using in-situ wind speeds from the four typhoons selected. The peak wind speed and the forward peak wind speed along the Shanghai Coast were calculated, case by case, during all typhoons over the period from 1949 to 2014 as well as the four typical typhoons selected. Finally, the range and distribution of the peak (forward peak) wind speed were quantitatively studied.
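A common parametric form for such a typhoon wind model (shown here only as a generic sketch; the paper's actual model and RMW formula are fitted to in-situ data, and all parameter values below are illustrative assumptions) is the Holland (1980) gradient-wind profile, which peaks exactly at the radius of maximum winds:

```python
import math

def holland_wind_speed(r_km, rmw_km, pn_hPa=1010.0, pc_hPa=950.0,
                       B=1.5, rho=1.15):
    """Holland (1980) gradient wind with the Coriolis term neglected:
    V(r) = sqrt( (B/rho) * (RMW/r)^B * (pn - pc) * exp(-(RMW/r)^B) );
    pressures converted from hPa to Pa, result in m/s."""
    dp = (pn_hPa - pc_hPa) * 100.0  # pressure deficit, Pa
    x = (rmw_km / r_km) ** B
    return math.sqrt((B / rho) * dp * x * math.exp(-x))

# the profile is maximal at r = RMW
speeds = {r: holland_wind_speed(r, rmw_km=40.0) for r in (20.0, 40.0, 80.0)}
```

Because the profile's maximum sits at r = RMW, an accurate RMW formula directly controls where the calculated peak winds land along the coast.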
In this study, we analyzed the spatial and temporal variations in the residual water level of the Changjiang Estuary and evaluated their causes. To achieve this objective, we used hourly water level data from the Chongxi, Nanmen, and Baozhen hydrological stations in the Changjiang Estuary; daily river discharge at the Datong hydrological station; and wind speed and direction at the Chongming eastern shoal weather station in 2016 and 2017. The results showed that, in each month, the residual water level was the highest at Chongxi station and the lowest at Baozhen station among the three hydrological stations. The drops in residual water level among the hydrological stations became smaller during low river discharge and larger during high river discharge; that is, higher river discharge was associated with a larger drop in the residual water level. In 2016, the residual water levels at the Chongxi, Nanmen, and Baozhen hydrological stations were the lowest in February with values of 2.09, 1.96, and 1.93 m, respectively, and the highest in July with values of 2.91, 2.62, and 2.50 m, respectively. The residual water level was mainly affected by river discharge, while the wind was also an important influencing factor: southerly winds lowered the residual water level, and northerly winds raised it. In 2017, the minimum monthly mean residual water level occurred in December with values of 2.04, 1.91, and 1.87 m at the Chongxi, Nanmen, and Baozhen hydrological stations, respectively; this coincided with the lowest annual river discharge observed during the same period. The maximum monthly mean residual water level occurred in October with values of 2.79, 2.58, and 2.49 m at the Chongxi, Nanmen, and Baozhen hydrological stations, respectively.
Although the river discharge was lower in October than in July by 24 214 m³/s, the residual water level was higher in October. This phenomenon is explained by the persistent strong northerly wind in middle-to-late October, which produced strong landward Ekman water transport and raised the water level. The spatial and temporal variation in the residual water level of the Changjiang Estuary is remarkable and should be considered in engineering design and theoretical research.
Qingcaosha Reservoir is the main water source for Shanghai, providing approximately 55% of its high-quality raw water needs and effectively guaranteeing the safety of the city's water supply. The waters near the Qingcaosha Reservoir experience saltwater spillover from the North Branch into the South Branch; the nearby waters, moreover, suffer from direct saltwater intrusion from the open sea. In this study, a large number of measured salinity data at the upstream and downstream sluices were used to statistically analyze the characteristics of direct saltwater intrusion in the waters near the Qingcaosha Reservoir in recent decades. The analysis results show that direct saltwater intrusion near the Qingcaosha Reservoir in recent decades was closely related to river discharge, tide, and wind. There were a total of 16 instances of direct saltwater intrusion at the upstream sluice, occurring from September to March of the following year; likewise, there were a total of 41 instances of direct saltwater intrusion at the downstream sluice, occurring from September to May of the following year. The direct saltwater intrusions at the upstream and downstream sluices appeared primarily in December, January, and February of each year. We found that saltwater intrusions occurred most commonly when the river discharge was less than 18 000 m³/s during neap tide and middle tide (after neap tide), accompanied by persistent northerly or northwesterly winds. We also found that the strength and duration of the northerly or northwesterly winds in the days preceding an intrusion played an important role in direct saltwater intrusion.
Twenty-two surface sediments collected from the Changjiang Estuary and the neighboring shelf were subjected to particle-size and magnetic measurements, with the intent of understanding the implications for provenance, transport, and depositional dynamics. The results showed that Changjiang River-derived sediments, relict sands, and Yellow River-derived sediments were the primary sources controlling the magnetic properties of sediments in the study area; these three sources, however, exhibited different spatial distributions. Spatial variations of magnetic parameters, including magnetic susceptibility (χ), saturation isothermal remanent magnetization (SIRM), hard isothermal remanent magnetization (HIRM), and anhysteretic susceptibility (χARM), suggest that sediments from the Changjiang River are transported towards the south and southeast when they move out of the river mouth. According to bi-plots of SIRM versus χ and S-ratio (S–100) versus SIRM, the > 63 μm fraction is roughly bounded by the 30 m isobath that separates the Changjiang River sediment from the relict sands on the shelf. The < 16 μm fraction is derived mainly from the modern fluvial sources of the Changjiang and Yellow Rivers; in particular, Changjiang River-derived sediment dominates the inner estuary and Yellow River-derived sediment dominates the northern coast of the shelf. The other areas of the shelf are characterized by mixed sources of the < 16 μm fraction, with the majority being Changjiang River-derived sediment. Spatial variations of particle size compositions and magnetic properties reflect the role of hydrodynamic sorting on particle size as well as mineral density; this results in differences in magnetic properties among the sedimentary units as well as in the contribution of different sized fractions to the bulk SIRM values. Particle size separation could reduce the effect of particle size on bulk magnetic properties and lead to more precise provenance discrimination.
Our results have great potential in the study of geomorphological changes and quantitative source identification in delta environments.
Flux footprint analysis is an important step in studying the carbon, water vapor, and heat flux exchange of land-atmosphere interactions based on the eddy covariance (EC) method. In this research, we used the flux source area model (FSAM) to investigate seasonal flux footprints under different wind directions and atmospheric conditions on the basis of half-hourly EC measurements throughout 2018. The results showed that: ① The flux footprint area changes with the seasons. Under stable stratification, the flux footprint area ranked, from largest to smallest, autumn, summer, spring, and winter; under unstable stratification, the flux footprint area did not change significantly between seasons. The daily variation in the footprint, moreover, was obvious, and the footprint was larger at nighttime than during the daytime. ② The flux source area under non-prevailing wind conditions was larger than that under the prevailing wind condition. ③ The flux source area was much larger under stable stratification. The distance between the location of the maximum value of the flux footprint and the station was also much larger under stable stratification.
Surface sediments were collected from five representative areas—the floodgate entrance, the north and south sides of the reclamation area, and the central and downstream sections—of Qingcaosha Reservoir; the pollution characteristics and potential ecological risk of seven heavy metals (Cu, Zn, Pb, Cr, Cd, As, and Hg) in these sediments were subsequently investigated. Results showed that the heavy metal content in the surface sediments exhibited spatial variation: the content was relatively high in the center of the reservoir and low on the north and south sides of the reclamation area. Heavy metals in the surface sediments, in addition, were mainly in the residual fraction; the content of heavy metals in the exchangeable fraction was extremely low. A potential ecological risk assessment indicated that the comprehensive potential ecological risk index (ERI) of the investigated heavy metals ranged from 55 to 113. The maximum ERI value was observed around the floodgate of the reservoir entrance, and low ERI values were observed on the north and south sides of the reclamation area. The ERI was lower than the threshold for low ecological risk, indicating that heavy metals in the surface sediments of the Qingcaosha Reservoir pose low potential ecological risk.
The biological filtering effect of reservoirs has become an area of focus for environmental science. We conducted an in situ survey of chlorophyll-a (Chl.a) and nutrients at the Zhexi, Zhelin, Hualiangting, and Yahekou reservoirs, which have different upstream retention times. We found that: ① In the vertical direction, Chl.a in each reservoir peaked in the subsurface layer and generally decreased downward, indicating that nutrients in the upper layers were assimilated by algae; the average vertical retention rates of DIN, DIP, and DSi in the reservoirs were 6.29%, 14.92%, and 8.60%, respectively. ② The concentration of Chl.a and the biomass of phytoplankton generally decreased from upstream to downstream, indicating that substantial nutrients were assimilated by algae upstream; the average horizontal retention rates of DIN, DIP, and DSi in the reservoirs were 26.53%, 39.89%, and 31.70%, respectively. ③ The total average retention rates of DIN, DIP, and DSi of the four reservoirs were 32.82%, 54.80%, and 40.30%, respectively. ④ The concentration of DIP decreased gradually with increases in the reservoir’s retention time; in fact, the concentration of DIP even decreased to 0.1 μmol/L, i.e., the growth of phytoplankton was fully limited by DIP.
The concentrations and relative ratios of alkaline earth metals, such as Sr, Ba, and Ca, in sediments are widely used to discriminate marine and terrestrial environments in paleoenvironmental research. However, geochemical elements occur mostly in mineral crystal lattices (namely, the residual phase after acid extraction), which is not linked to the physical, biological, or chemical environments of the deposition processes. Hence, only selective extraction of phases can be used to interpret changes in the sedimentary environment. In this study, we collected surficial sediments from the present-day saltmarsh-tidal flat, alluvial plain, and tidal river (Yaojiang River) in Ningbo Plain and used a plasma spectrometer to measure the concentrations of Sr, Ba, and Ca in: the leachates extracted by diluted acetic acid (HAc) and diluted hydrochloric acid (HCl), the residues after acid extraction, and the bulk samples. The results showed that alkaline earth metals in the HAc-leachates were most sensitive to changes in the sedimentary environment, followed by the HCl-leachates. No variation in Sr/Ba (molar ratio) could be distinguished in the bulk samples of surficial sediments collected from different sedimentary settings. Furthermore, consistent results were obtained by using different sample amounts and measuring instruments when applying the HAc method. Significant variations in alkaline earth metals in the HAc-leachates were observed for the surficial sediments in this study. Ca and Sr showed the highest concentrations in the saltmarsh-tidal flat sediments and the lowest concentrations in the alluvial sediments; Ba concentration showed the opposite trend. We thus suggest that end-member analyses of the alkaline earth metals in HAc leachates can be used to effectively identify transgression/regression recorded in sedimentary stratigraphy in the coastal zone.
Laoyehai, a lagoon located on the east coast of Hainan, is impacted heavily by human activities (especially those related to aquaculture) and is characterized by eutrophic and hypoxic waters. During previous dry and flood seasons (specifically, April 2010 and August 2011), when hypoxia occurred, field work was conducted to observe dissolved oxygen (DO) and to collect organic matter. Hypoxia was significant in the spring season, with surface DO as low as 50%, while bottom hypoxic water prevailed in both seasons. In the spring season, the C/N ratio of particulate organic matter was higher than that observed in the summer season (C/N in the spring: 9.7; C/N in the summer: 7.7). Organic matter composition indicated by amino acids showed stronger degradation of organic matter in the spring relative to the summer. Lower C/N values and higher carbon and nitrogen yields of amino acids (AA C yield, AA N yield) in the summer showed active in situ production, suggesting that organic matter was mainly derived from phytoplankton; this also explains the sufficient surface DO in the summer. The degradation of particulate organic matter increased with the decrease of dissolved oxygen, indicating that particulate organic matter and its degradation were the key driving factors for oxygen consumption in the lagoon. Meanwhile, we found that the relationship between dissolved organic matter components and DO was not significant.
Urban greenspace is an important part of the urban green system and urban landscape, with important ecological, social, psychological, and economic functions. The evolutionary trajectories and change patterns of urban greenspaces are of great significance to the sustainable development of urban green space systems and the optimization of the urban ecological network. There is ongoing emphasis on urban greenspace research; most previous studies of greenspace change have used landscape indices and spatial analysis methods, which struggle to accurately reflect the change process, change types, and spatial distribution patterns of greenspace. In our paper, seven types of greenspace evolution were defined: continuous, expansion, contraction, dissipation, creation, merging, and splitting. Then, an evolution graph was constructed by defining greenspace patches as nodes and greenspace evolution relations as edges. Based on the greenspace evolution graph, the greenspace evolution process and its corresponding evolutionary trajectory were further extracted and visualized. Taking the Shanghai city center as a case study area, the spatial distribution pattern and change process of the urban greenspace for 2008, 2012, and 2016 were extracted. Results indicated that the most dominant greenspace evolution types were creation and dissipation. The newly added urban greenspace patches were more evenly distributed compared with those patches involved with an evolution type of dissipation. Small patches were more likely to be located in the urban center, while large patches tended to be concentrated in rural areas. The locations of greenspace patches that disappeared were mostly concentrated in non-central areas, particularly in the Pudong New Area.
Compared with the urban greenspace changes between 2008 and 2012, locations where new greenspace appeared between 2012 and 2016 were more evenly distributed, while locations where greenspace disappeared were more concentrated.
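As a toy sketch of the evolution-graph construction described above (illustrative only: the grid-cell patch representation, the overlap rule, and the area tolerance `tol` are assumptions, not the paper's actual algorithm):

```python
def classify_evolution(patches_t1, patches_t2, tol=0.05):
    """Classify greenspace patch evolution between two dates.
    Patches are dicts mapping a unique id to a set of grid cells;
    graph edges connect patches that overlap across dates.
    One-to-one transitions are labelled on the later-date patch."""
    succ = {a: [b for b, cb in patches_t2.items() if ca & cb]
            for a, ca in patches_t1.items()}
    pred = {b: [a for a, ca in patches_t1.items() if ca & cb]
            for b, cb in patches_t2.items()}
    events = {}
    for a, outs in succ.items():
        if not outs:
            events[a] = "dissipation"      # no successor patch
        elif len(outs) > 1:
            events[a] = "splitting"        # one patch becomes several
    for b, ins in pred.items():
        if not ins:
            events[b] = "creation"         # no predecessor patch
        elif len(ins) > 1:
            events[b] = "merging"          # several patches become one
        elif len(succ[ins[0]]) == 1:       # one-to-one: compare areas
            ratio = len(patches_t2[b]) / len(patches_t1[ins[0]])
            events[b] = ("continuous" if abs(ratio - 1.0) <= tol
                         else "expansion" if ratio > 1.0 else "contraction")
    return events

# toy patches on a cell grid; ids are unique across both dates
t1 = {"A": {1, 2}, "B": {5}, "C": {7, 8}, "D": {10}, "E": {11}}
t2 = {"X": {1, 2, 3}, "W": {5}, "Y": {9}, "Z": {10, 11}}
events = classify_evolution(t1, t2)
```

The node/edge structure built here is exactly what makes trajectories extractable: following `succ` chains forward from any patch yields its evolutionary trajectory over time.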
Using a combination of 0.25 m resolution aerial remote sensing data and topographic maps, land use data for the Shanghai Huangpu River Water Source Protection Area in 2000, 2005, 2010, and 2015 were evaluated by means of manual visual interpretation. With the growth of industrial land from 2000 to 2015, there has been a relative decline in the proportion of agricultural land and water areas and a relative increase in the proportion of urban land areas. In the past 15 years, the areas used for farming and for cultivating livestock and poultry have decreased by 44.17% and 71.65%, respectively. The water area has decreased by 6.44%. The area of green land forests, in contrast, has increased by 645.94%. The area of urban land has increased by 53.53%. All types of urban land use have increased, with the area used for industrial storage increasing the most, at 21.77%. From the perspective of land transfer, the area of cultivated land transferred outward was the largest in the past 15 years, at 22,839.96 hectares, and the transfer-outward rate of livestock and poultry farming land was the highest, at 91.23%. The area of green land forests transferred inward was the largest, at 16,190.32 hectares; the transfer of industrial storage land inward was the second largest, at 7,979.12 hectares. Based on an analysis of development in the region, population changes, policy impacts, and other factors, our results indicate that urbanization and industrialization drove the increases in urban land areas; moreover, environmental policies affected green land areas as well as livestock and poultry farming land areas, market regulations affected aquaculture land areas, and environmental policies and urbanization affected water source protection areas.
With the ongoing urbanization process in Hangzhou, we investigated the species composition and structure of ruderal communities across eight urban habitat types. Habitat factors such as light intensity, soil pH, soil electrical conductivity, soil compaction, soil total nitrogen, soil total phosphorus, soil organic matter, and interference types were measured; we subsequently analyzed the relationship between the species composition and habitat factors of the ruderal communities. The results indicated that forest gap and lawn were the most common habitat types, accounting for 20.1% and 16.3%, respectively, of the total 1665 sampling plots surveyed. In the seven habitats other than the tree pool, moreover, dwarf-growth annual ruderals were the dominant species within the community. There were 30 ruderal species distributed across all eight habitats. Environmental factors varied across the different habitats: the light intensity was the lowest in the forest gap, the soil conductivity value was the highest in the shrub-grassland gap, and the light intensity and soil compactness were the highest in abandoned land.
In this study, we investigated the plant species composition of three types of abandoned farmland and compared them with cultivated farmland in the Chongqing suburban area. We analyzed dynamic changes in species composition and community type as well as trends in plant diversity. The results showed that a total of 99 species, belonging to 90 genera and 39 families, were recorded in the spring and autumn. At the second level of the TWINSPAN classification, the cultivated farmland, early abandoned farmland, and late abandoned farmland could all be distinguished. As the number of years since abandonment increased, the dominant life form of the plant community gradually transitioned from annual to perennial, and woody plants began to become the dominant species. Plant diversity gradually increased from the early to middle stage of abandonment, but declined during the later stage.
In this study, we measured the branch xylem structure of 85 woody plant species at the Shanghai Chenshan Botanical Garden to compare vessel characteristics among different life forms and to test for phylogenetic signals. The trade-off between vessel density and vessel size was subsequently compared among different life forms. The results showed that: ① The vessel diameter ((28.55 ± 8.84) μm) and vessel ratio (8.7% ± 2.89%) of evergreen woody plants were significantly smaller than the vessel diameter ((35.81 ± 13.92) μm) and vessel ratio (12.7% ± 4.82%) of deciduous woody plants; meanwhile, there was no significant difference observed in the vessel density between evergreen plants ((149.3 ± 75.62) N/mm2) and deciduous plants ((164.5 ± 154.28) N/mm2). The vessel diameter of trees ((35.86 ± 13.5) μm) was significantly larger than that of shrubs ((26.24 ± 8.84) μm), but there was no significant difference observed in the vessel ratio and vessel density between trees (12.09% ± 5.01%; (151.9 ± 142.73) N/mm2) and shrubs (10.59% ± 2.99%; (208.7 ± 126.37) N/mm2). ② There were significant phylogenetic signals observed in vessel diameter and vessel density, and the signal of vessel density was larger than that of vessel diameter. There was, however, no obvious phylogenetic signal in the vessel ratio. ③ The standardized major axis test indicated that the trade-off between vessel density and vessel size existed in all life forms, with a common slope coefficient of –0.89 and a 95% confidence interval of (–0.98, –0.79). However, the intercept of evergreen trees was significantly smaller than that of deciduous trees, suggesting that deciduous trees have a larger vessel diameter than evergreen trees for a given vessel density.
In this study, we evaluated the variation patterns of air anions in nine plant communities with different structures in Zhongshan Park in central Shanghai; the air anion concentration was monitored continuously over the course of a year. In addition, we analyzed the influence of different factors (community structure, canopy density, and the level of surrounding water) on air anion concentration. The results showed that the air anion concentration within different community types was mostly between 200 and 700 ions/cm3, and the daily variation showed a single peak. Air anion concentration remained at a high level but fluctuated significantly from July to October. The relationship between community structure and air anion concentration was roughly as follows: herbage > arbor with shrubs ≈ arbor with herbage > arbor with shrubs and herbage; in general, the more complex the community structure, the less the air anion variability. There was a negative correlation between the mean variation of the air anion concentration and the canopy density, implying that higher canopy density values were associated with lower mean variation of the air anion concentration throughout the community. This negative correlation became more significant in the daytime, between 7:00 and 19:00, when photosynthesis was ongoing. In addition, the impact of static water on the anion concentration was not found to be significant. The conclusions of this paper provide basic data and a scientific basis for the construction of healthy plant communities in urban parks.
This study investigated the distribution characteristics and influencing factors of crabs and crab burrows in the Fengxian coastal wetland to reveal the main factors influencing the distribution of crabs and crab burrows and to deepen the understanding of crabs’ living habits. The results showed that: ① The abundance of Helice tientsinensis in the high-marsh Phragmites australis habitat is higher than that in the middle-marsh Phragmites australis-Spartina alterniflora mixed habitat and the low-marsh Spartina alterniflora habitat (p < 0.01). However, there is no significant difference in the abundance of Sesarma plicata between habitats (p > 0.05). ② The density of crab burrows in the high-marsh Phragmites habitat is significantly higher than that in the middle-marsh Phragmites-Spartina mixed habitat and the low-marsh Spartina habitat (p < 0.05), while the average opening diameter of crab burrows is significantly lower than that in the middle-marsh Phragmites-Spartina mixed habitat and the low-marsh Spartina habitat (p < 0.05). ③ There is no significant linear relationship between crab abundance and the density of crab burrows (p > 0.05), while there is a significant positive correlation between the density of crab burrows and the abundance of Helice tientsinensis (p < 0.01). ④ Crab abundance is negatively correlated with plant underground biomass (p < 0.01). ⑤ There is a negative correlation between the density of crab burrows and both vegetation coverage and plant density. The relative elevation, water content, conductivity, total organic carbon content, and total nitrogen content are positively correlated with the density of crab burrows. Among these factors, the relative elevation is the habitat factor with the highest correlation with the density of crab burrows.
Let ${\mathfrak{g}}$ be the Witt algebra over an algebraically closed field of characteristic $p>3$, and let $r\in\mathbb{Z}_{\geqslant 2}$. The commuting variety ${{\cal{C}}_{r}}\left( \mathfrak{g} \right)$ of $r$-tuples over ${\mathfrak{g}}$ is defined as the collection of all $r$-tuples of pairwise commuting elements in ${\mathfrak{g}}$. In contrast with Ngo’s 2014 results for classical Lie algebras, we show that the variety ${{\cal{C}}_{r}}\left( \mathfrak{g} \right)$ is reducible, with a total of $\frac{p-1}{2}$ irreducible components. Moreover, the variety $ {{\cal{C}}_{r}}\left( \mathfrak{g} \right) $ is not equidimensional. All irreducible components and their dimensions are precisely determined. In particular, the variety ${{\cal{C}}_{r}}\left( \mathfrak{g} \right)$ is neither normal nor Cohen-Macaulay. These results differ from those for the classical Lie algebra $\mathfrak{sl}_2$.
For the infinite dimensional simple 3-Lie algebra $A_{\omega}^{\delta}$ over a field $\mathbb F$ of characteristic zero, we construct two infinite dimensional intermediate series modules $(V, \rho_{\lambda, 0})=T_{\lambda, 0}$ and $(V, \rho_{\lambda, 1})=T_{\lambda, 1}$ of $A_{\omega}^{\delta}$, as well as a class of infinite dimensional modules $(V, \psi_{\lambda,\mu})$ of ${\rm ad}(A_{\omega}^{\delta})$, where $\lambda, \mu\in \mathbb F$. The relation between 3-Lie algebra $A_{\omega}^{\delta}$-modules and induced modules of ${\rm ad}(A_{\omega}^{\delta})$ is discussed. It is shown that only two of these infinite dimensional modules, namely $(V, \psi_{\lambda, 1})$ and $(V, \psi_{\lambda, 0})$, are induced modules.
Let $ G $ be a connected reductive algebraic group over an algebraically closed field $ k $ of prime characteristic $ p $, let $ {\frak {g}} = {\rm{Lie}}(G) $, and let $U_{\chi}({\frak {g}}) $ be the reduced enveloping algebra. In this paper, when the $ p $-character $ \chi $ has standard Levi form, we prove that a $ U_{\chi}({\frak {g}}) $-module $ Q $ is a tilting module if and only if it is projective.
In this paper, we establish a Kastler-Kalau-Walze type theorem for even dimensional manifolds with boundary for Dirac operators with torsion; in addition, we provide a simple theoretical explanation of the Einstein-Hilbert action for any even dimensional manifold with boundary.
In this paper, we study the primitive equations of the atmosphere in the presence of vapor saturation; these equations are often used in forecasting weather in a cylindrical region. By using the technique of differential inequality and the method of energy estimation, we obtain a priori bounds on the solutions of the equations, and we prove the continuous dependence of the solutions on the boundary parameters.
Considering the prevalence of variations in virus strains and the age of infection, a vector-borne infectious disease model with latent age and horizontal transmission is proposed. An exact expression for the basic reproduction number, ${\cal R} _0 $, is given, which characterizes the existence of the disease-free equilibrium and the endemic equilibrium for this model. Next, by using a combination of linear approximation methods, suitable Lyapunov functions, the LaSalle invariance principle, and other methods, we prove that if ${\cal R}_0 <1 $, then the disease-free equilibrium is globally asymptotically stable and the disease eventually becomes extinct; if ${\cal R}_0>1$, then the endemic equilibrium is globally asymptotically stable and the disease persists as an endemic.
A number of algebraic methods for constructing exact finite series solutions of nonlinear evolution equations are based on the homogeneous balance principle, such as the tanh function method, the Jacobi elliptic function method, the Painlevé truncated expansion method, and the CRE method. In each of these methods, the order of the required solutions is determined by the homogeneous balance principle. In this paper, the homogeneous balance principle is further extended by considering additional balance possibilities. An n-order expansion method is proposed to determine possible new orders of the required solutions. By applying the proposed method to several examples, we show that higher orders and new solutions can be obtained.
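As a concrete illustration of the balance count underlying these methods, consider the tanh-function method applied to the KdV equation u_t + 6uu_x + u_xxx = 0 (a standard textbook example, not one of the equations treated in this paper): each x-derivative of a finite tanh expansion raises the degree in tanh by one, and equating the degrees of the dispersion and nonlinear terms fixes the expansion order n. A minimal sketch:

```python
# Homogeneous balance count for the tanh-function method.
# For u = sum_{i=0}^{n} a_i * tanh(xi)**i, each x-derivative raises the
# tanh-degree by one, so:
#   highest linear term  u_xxx    has tanh-degree n + 3
#   nonlinear term       u * u_x  has tanh-degree n + (n + 1) = 2n + 1
# Balancing the two degrees fixes the required expansion order n.

def balance_order(linear_order, nonlinear_factors):
    """Balance n + linear_order = m*n + 1 for a term u^(m-1) * u_x.

    Returns the expansion order n = (linear_order - 1) / (m - 1),
    or None when the balance gives a non-integer order.
    """
    m = nonlinear_factors
    n, rem = divmod(linear_order - 1, m - 1)
    return n if rem == 0 else None

# KdV: third derivative u_xxx vs. u * u_x (m = 2 factors of u)
print(balance_order(3, 2))  # 2 -> a second-order tanh expansion is required
```

The same count reproduces the familiar orders for other standard equations; the paper's n-order expansion method then asks which further orders remain consistent beyond this classical balance.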
The use of a gene sequencer requires that the lens and gene chip be aligned accurately before base-calling. We propose an algorithm to calculate the deviation of the field of view (FOV) from its ideal position. Marks are set at locations on the gene chip in advance, so that the deviation in position of the lens relative to the gene chip can be analyzed. Firstly, the marked locations are captured by extracting grayscale features of the image to initially align the center of the FOV; secondly, the coordinates of multiple key points on the marks are captured; and finally, the location and angle deviations are calculated by mapping the coordinates of the key points. Practical and experimental analyses show that the image registration algorithm designed in this paper can achieve a high-precision estimate of the position deviation between the FOV and the gene chip.
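The final coordinate-mapping step can be sketched as a least-squares rigid-transform fit between the ideal and observed key points; the closed-form 2-D solution and the synthetic data below are a generic illustration, not the paper's actual implementation:

```python
# Generic sketch: recover the rotation angle and translation of the FOV
# from matched key points via a least-squares 2-D rigid fit.
import math

def estimate_deviation(ideal_pts, observed_pts):
    """Least-squares 2-D rigid fit: returns (translation, angle in degrees)."""
    n = len(ideal_pts)
    cx_i = sum(p[0] for p in ideal_pts) / n
    cy_i = sum(p[1] for p in ideal_pts) / n
    cx_o = sum(q[0] for q in observed_pts) / n
    cy_o = sum(q[1] for q in observed_pts) / n
    s_dot = s_cross = 0.0
    for (px, py), (qx, qy) in zip(ideal_pts, observed_pts):
        px, py, qx, qy = px - cx_i, py - cy_i, qx - cx_o, qy - cy_o
        s_dot += px * qx + py * qy       # ~ cos(theta) * sum |p|^2
        s_cross += px * qy - py * qx     # ~ sin(theta) * sum |p|^2
    theta = math.atan2(s_cross, s_dot)
    # translation maps the rotated ideal centroid onto the observed centroid
    tx = cx_o - (cx_i * math.cos(theta) - cy_i * math.sin(theta))
    ty = cy_o - (cx_i * math.sin(theta) + cy_i * math.cos(theta))
    return (tx, ty), math.degrees(theta)

# synthetic check: marks rotated by 1.5 degrees and shifted by (2.0, -1.0)
th = math.radians(1.5)
ideal = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]
observed = [(x * math.cos(th) - y * math.sin(th) + 2.0,
             x * math.sin(th) + y * math.cos(th) - 1.0) for x, y in ideal]
(tx, ty), angle = estimate_deviation(ideal, observed)
print(round(tx, 6), round(ty, 6), round(angle, 6))  # 2.0 -1.0 1.5
```

With noisy key points, the same formulas give the least-squares estimate rather than an exact recovery, which is why multiple key points per mark are useful.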
Internet of Things (IoT) technology is booming, and from the viewpoint of embedded experimental teaching, it is worth considering how to lead students to conduct preliminary exploration of the IoT field with embedded technology. In order to reduce the difficulty and cost of student learning and development, we designed and implemented a scalable IoT teaching and development system that integrates management and development functions. We adopted the open-source frameworks Spring Boot and Vue to develop the cloud side of the system. Using a micro-service architecture, we solved the typical challenges of strong coupling and poor scalability. In order to cope with high-concurrency scenarios, we designed a load balancing optimization algorithm based on threshold filtering. Experimental results showed that the algorithm enhances the average response speed of the cloud and improves load balancing in complex network environments. Based on the MSP432 development platform and the EMW3080 Wi-Fi module, we implemented an easy-to-use SDK (software development kit) that supports network configuration and data communication on the hardware side. The SDK reduces the complexity of the underlying hardware work and lowers learning costs, allowing developers to focus on implementing business logic. Combining the software and hardware work, the proposed IoT teaching and development system provides a complete set of personalized IoT development solutions and an embedded teaching management system.
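The threshold-filtering algorithm is described only at a high level; the following is a hypothetical sketch of one such rule (filter out instances whose load exceeds a threshold, then pick the least loaded), with all names and the threshold value assumed rather than taken from the paper:

```python
# Hypothetical sketch of threshold-filtering load balancing:
# overloaded instances are filtered out before least-loaded selection.
def pick_instance(loads, threshold=0.8):
    """loads: dict instance_name -> load in [0, 1]. Returns chosen instance."""
    eligible = {k: v for k, v in loads.items() if v < threshold}
    pool = eligible if eligible else loads    # fall back if every node is busy
    return min(pool, key=pool.get)            # least-loaded instance wins

print(pick_instance({"node-a": 0.95, "node-b": 0.40, "node-c": 0.62}))  # node-b
```

Compared with plain least-loaded selection, the threshold step cheaply removes saturated instances from consideration, which helps under the high-concurrency scenarios the paper targets.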
This paper presents an end-to-end method for Chinese text relation extraction based on a multi-channel CNN (convolutional neural network). Each channel is a stacked layered neural network; the channels do not interact during propagation, which enables the network to learn different representations. Considering the nuances of the Chinese language, we employ an attention mechanism to extract the semantic features of a sentence and then integrate structural information using piecewise average pooling. After the maximum pooling layer, the final representation of the sentence is obtained and a relational score is calculated. Finally, a ranking-loss function is used in place of the cross-entropy function for training. The experimental results show that the MCNN_Att_RL (Multi CNN_Att_RL) model proposed in this paper can effectively improve the precision, recall, and F1 score of entity relation extraction.
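The ranking loss is mentioned only by name; one common formulation for relation classifiers (assumed here to be representative, with illustrative margin and scaling values) penalizes a low score for the true relation and a high score for the best competing relation:

```python
# Pairwise ranking loss over per-relation scores (illustrative formulation):
# a margin term pushes the true class score up and the hardest negative down.
import math

def ranking_loss(scores, true_idx, gamma=2.0, m_pos=2.5, m_neg=0.5):
    """scores: list of relation scores for one sentence; true_idx: gold class."""
    s_pos = scores[true_idx]
    s_neg = max(s for i, s in enumerate(scores) if i != true_idx)  # hardest negative
    return (math.log1p(math.exp(gamma * (m_pos - s_pos)))
            + math.log1p(math.exp(gamma * (m_neg + s_neg))))

print(ranking_loss([0.3, 2.8, -0.4], true_idx=1))
```

Unlike cross-entropy over a softmax, this loss only compares the gold class against its strongest competitor, which is often argued to help with noisy negative classes in relation extraction.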
Multilayer networks can better reflect the structure and characteristics of many real-world systems, and in recent years they have become a focus area for many researchers. Based on the degree-degree correlation of interlayer nodes, we propose an intermediate degree coupling pattern to enhance the traffic capacity of multilayer networks at a low relative cost. In addition, the effectiveness of the intermediate degree coupling pattern is verified using two classic routing strategies, namely shortest path and efficient routing. Compared with three conventional coupling methods (assortative coupling, disassortative coupling, and random coupling), the intermediate coupling pattern makes the traffic load distribution more uniform on multilayer networks; hence, the traffic capacity of multilayer networks is greatly improved, and the average transport time of packets is effectively reduced. At lower coupling probabilities, the intermediate coupling pattern can significantly enhance the traffic capacity of a multilayer network when an efficient routing strategy is used. Meanwhile, simulation results show that a more uniform network topology results in higher traffic capacity.
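A rough sketch of the intermediate degree coupling pattern is given below; the pairing rule (rank nodes by degree in each layer and couple the middle-ranked ones) and the coupled fraction q are illustrative assumptions, not the paper's exact scheme:

```python
# Sketch of intermediate-degree interlayer coupling: in each layer, rank
# nodes by degree and add coupling links between the middle-ranked nodes.
def intermediate_coupling(deg_a, deg_b, q=0.2):
    """deg_a, deg_b: dict node -> degree for layers A and B.
    Returns a list of interlayer (node_a, node_b) pairs."""
    def middle(deg, k):
        ranked = sorted(deg, key=deg.get)     # ascending degree
        start = (len(ranked) - k) // 2        # window centered on median degree
        return ranked[start:start + k]
    k = max(1, int(q * min(len(deg_a), len(deg_b))))
    return list(zip(middle(deg_a, k), middle(deg_b, k)))

deg_a = {f"a{i}": d for i, d in enumerate([1, 2, 4, 5, 9, 12])}
deg_b = {f"b{i}": d for i, d in enumerate([1, 3, 3, 6, 8, 20])}
print(intermediate_coupling(deg_a, deg_b, q=0.34))  # couples mid-degree nodes
```

The intuition matches the abstract: coupling hubs (assortative) concentrates interlayer traffic on already-loaded nodes, while coupling medium-degree nodes spreads the load more evenly.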
In this paper, we studied a new preparation technique for lyophilized lentiviral vectors. We determined the optimal formulation for a freeze-drying protective agent by screening and optimizing potential candidates. The candidates were evaluated on the basis of the physical and chemical properties of the freeze-dried product, including appearance, excipient, color, and solubility. The optimal formulation was determined to be trehalose 0.30 g/mL, L-histidine 0.31 mg/mL, L-alanine 0.178 mg/mL, CaCl2 0.020 mg/mL, and MgSO4 0.015 mg/mL. With this technique, the prepared lyophilized lentiviral vector had a good appearance, low residual water content, intact structure, and good re-dispersibility. The biological titer of the lentiviral vector reached 9.37 × 10⁷ IU/mL, and the recovery rate of the titer was 50.15%. We also conducted research on potential influencing factors, including a high-temperature accelerated experiment and repeated freeze-thaw stability experiments. These experiments showed that the lyophilization technology can be used for the preparation of lentiviral vector solids and can effectively improve the stability of lentiviral vectors during storage at different temperatures, through repeated freeze-thaw cycles, and in adverse environments (e.g., high temperatures).
In this study, ground-level air pollutants (i.e., particulate matter (PM), NO2, and CO) were monitored along two transects of an urban road green belt in Shanghai for one month during the summer season. Four monitoring sites at 100 m intervals were set along each transect from the road inward. The air pollution at each site was evaluated based on China’s national standard, and the variation in air pollution purification ability (i.e., removal percentage) was compared among sites at different distances from the road. The effects of meteorological conditions and pollution background on the maximum removal percentage of each air pollutant were evaluated by multiple regression analysis. The results showed that the green belt greatly contributed to reducing PM2.5, PM10, and NO2; however, the green belt also produced a cumulative effect on CO generation within its boundaries. For most pollutants in both transects, the green belt showed the greatest air pollution purification performance at sites 300 m from the road. The maximum removal percentages of PM2.5 and PM10 were mostly correlated with the differences in air humidity and air temperature between the outside and inside of the forest, while the maximum removal percentage of NO2 was correlated with the pollution background and the maximum removal percentage of CO was correlated with the air temperature difference. These results can provide a theoretical foundation for forest transformation and arrangement aimed at air pollution purification in green belts.
Extreme precipitation and floods may occur during a storm surge hazard, accompanied by typhoon conditions and high tide levels. The combination of these factors intensifies the risk of flooding in coastal regions suffering from a storm surge; thus, multi-impact analysis should be applied to determine flood risk during a storm surge. River networks play an important role in flood processes: the storage and transportation capacity provided by rivers can directly change the distribution of a flood. In this paper, a 1-D river network model and a 2-D surface model were established and coupled to simulate the flood processes during an assumed storm surge in Jinshan District, Shanghai. The cumulative influence of the concurrent storm surge, typhoon, rainfall, and upstream flooding was explored to support hazard risk analysis for Jinshan District. The coupled model’s simulation indicated a clear decrease in the number of waterlogged areas in Jinshan District once the river network’s storage and transportation capacity during a storm surge event was taken into account. The distribution of predicted waterlogged areas also changed; according to the simulation results, the flood risk grade decreased in central and northern Jinshan District and rose in the northwest corner.
In order to improve the low specific surface area of g-C3N4, three-dimensional (3D) porous g-C3N4 was prepared by high-temperature thermal polymerization. An Fe2O3/g-C3N4 catalyst was then prepared by compositing the g-C3N4 with Fe2O3 to improve its visible light response. The decolorization rate achieved by the Fe2O3/g-C3N4 catalyst reached 100% in 30 minutes with a g-C3N4 content of 900 mg, a Rhodamine B (RhB) concentration of 20 mg·L–1, and an H2O2 content of 15 mmol. The Fe2O3/g-C3N4 catalyst also demonstrated good performance in degrading other organics; the degradation rates of methyl orange (MO) and tetracycline (TC) reached 80% and 90%, respectively, in 30 minutes. The photocatalytic mechanism was explored by active group capture experiments, and the results show that h+ and ·OH play an important role in the process of photocatalysis.
To improve the environmental tolerance and nitrogen removal efficiency of an aerobic denitrifier, polyvinyl alcohol (PVA), sodium alginate (SA), and rice hull powder were used as immobilization carriers for the denitrifier, and the performance was subsequently evaluated. The results showed that the optimal formulation of the immobilized particles was a mixture of 12% PVA, 8% SA, 0.5 g rice hull powder, and 10 mL bacterial solution. The immobilized particles had strong stability and mass transfer capability; the removal efficiency of TN was 89.35% to 90.12% over 48 h. The immobilized particles had good tolerance to pH and rotation speed: when the pH was 11, the removal efficiency of TN was 90%, and the removal efficiencies of TN and NH4+-N were the highest (91.29% and 93.30%, respectively) when the speed was 120 r/min. The immobilized particles were not resistant to low temperatures (10℃ and 15℃), and the TN removal efficiency was only about 20% at 10℃; at 30℃, however, the TN removal efficiency reached 90.59%.
In this study, the rank evaluation method was used to comprehensively assess engineering applications for integrated multi-pond constructed wetlands (MPCWs) using a multi-dimensional evaluation system. We used pollutant purification performance, sewage storage capacity, vegetation ecological restoration, and economic investment as indicators for the evaluation. The results showed that the application of large-scale integrated MPCWs for controlling non-point source pollution was helpful for intercepting pollutants. Accumulated and purified reclaimed water was available for nearby rural agricultural water use. The implementation of MPCWs can result in water savings, pollution reduction, water resource allocation, and sewage reuse. The inclusion of vegetation within MPCWs was beneficial for ecological vegetation restoration and sewage purification. Given the economic investment requirement for MPCWs and the high potential security risks of deep-water MPCWs, we proposed application suggestions for different groups of MPCWs based on functional requirements. Shallow free water surface flow constructed wetlands could be used in populous areas with small volumes of highly polluted water, and eco-floating treatment wetlands could be used in sparsely populated areas with large volumes of highly polluted water. The scientific application of different groups of MPCWs also requires consideration of other factors, such as local special land resource endowments, pollution source structures, and the allocation of rural agricultural water resources.
In this paper, we propose the concepts of an “LID (low impact development) index” and “LID runoff reduction efficiency” based on an analysis of the runoff reduction efficiency of different LID technical measures. A map was designed to help quickly select the appropriate LID facility and its proportions according to the pollution reduction target in a built-up area. The analysis shows that as the LID index increases, surface runoff and pollutant loads decrease following a similar exponential form; the larger the LID index, the lower the LID runoff reduction efficiency. The model data are easy to obtain and flexible, rendering potential applications worthy of exploration.
Since the eleventh Five-Year Plan, the National Major Science and Technology Program for Water Pollution Control and Treatment (referred to as the “Water Program”) has developed more than 20 key technologies to assist in restoring the lakeshore zone of the Taihu Lake Basin. These solutions overcome the limitations of applying a single technology to the ecological restoration of the lakeshore zone. They include technologies for: rebuilding the upwind bank slope to eliminate waves and algae in the ecological restoration area; rapid settlement of sediment for lasting improvements in water quality; multi-level reconstruction of aquatic vegetation in open water areas; large-scale cultivation and community construction for the optimal allocation and stabilization of aquatic plants; and utilization of aquatic vegetation resources for long-term operation and management, based on the technical requirements for improving soil stability, improving the wetland habitat, and restoring the aquatic vegetation in the restoration area. Hence, a comprehensive technology solution for the ecological restoration of different lakeshore zones in the Taihu Lake Basin (titled “investigation and assessment of lakeshore zone status, wetland habitat improvement, wetland aquatic vegetation restoration, and long-term management”) was formed. The complete technology solution for vegetation restoration in the dike-type lakeshore zone has been successfully applied in Zhushan Bay of Taihu Lake, with wind waves reduced by 64% and the vegetation coverage rate exceeding 30%. The complete technology solution for vegetation restoration in a gentle-slope lakeshore zone was also successfully applied in Gonghu Bay of Taihu Lake; the implementation resulted in aquatic plant coverage reaching 57%, water transparency of more than 110 cm, and a greatly improved biodiversity index.
In summary, the research results provide a practical basis for aquatic vegetation restoration and water quality improvement.
In this paper, we provide an overview of the development of emission permit systems domestically and globally, and analyze the problems and technology requirements for an emission permit management system at the initial stage of the National Major Science and Technology Program for Water Pollution Control and Treatment (referred to hereinafter as the “Water Program”) in the Taihu Basin. Based on a summary of technical achievements from the 11th and 12th Five-Year Plans for the Taihu Basin Water Program, a comprehensive set of emission permit management technologies for industrial point sources was developed, covering control unit division, control unit pollution load verification, control unit water environmental capacity calculation, assessment of water pollution control and management for key industries, allocation of emission permits, and dynamic monitoring. Furthermore, the effects of implementing this complete set of technologies in the Taihu Lake Basin were explored and will serve as a reference for the implementation of a pollution permit management system.
Understanding the impact of dissolved organic matter (DOM) on the denitrification process is critical to addressing the challenges associated with nitrogen removal in urban river treatment. In this paper, we show that DOM in urban rivers is mainly composed of small-molecule fulvic acids; the humic acid content and aromaticity of the DOM, moreover, were found to be low. Compared with the control, DOM addition promoted the denitrification process; specifically, the removal efficiencies of TN and NO3–-N in the DOM-added group increased by 7.24% ± 0.36% and 23.52% ± 1.17%, respectively. DOM with an acetate group had an even better effect on the removal of TN and NO3–-N, reaching 74.48% ± 1.29% and 98.62% ± 0.07%, respectively. Microbiological analysis showed that DOM addition significantly increased the diversity and richness of the bacterial community compared with the control. However, the relative abundance of the heterotrophic denitrifiers Pseudomonas and Brevundimonas as well as the nirK-type denitrifier Paracoccus in the DOM-added group was lower than in the DOM-with-acetate group. Additionally, a relatively high concentration of NH4+-N (> 3.7 mg/L) was observed in the DOM-added group. The addition of DOM significantly increased the relative abundance of Anaeromyxobacter, which is related to dissimilatory nitrate reduction to ammonium (DNRA) functional genes. It is speculated that DOM promotes the denitrification process and induces the DNRA process simultaneously.
This paper provides an overview of the technical achievements in A2/O upgrading during the 11th and 12th Five-Year Plans as well as the current successful operation of the improved A2/O process. We summarize the measures used for upgrading the A2/O process of municipal wastewater treatment plants under high discharge standards with respect to in-situ optimization and advanced treatment. Finally, we review the operating state of representative A2/O upgrade demonstration projects and offer suggestions for optimization and promotion of the A2/O process.
In this paper, we use hydrology and water quality survey data from around Dalian Lake to assess the environmental status of the water and surface runoff pollution in the Dalian Lake demonstration area. The results show that the water quality of the Jinze water source can largely satisfy class Ⅲ standards for surface water; however, given seasonal differences in some indicators, the water quality of the Jinze water source fails to meet the standard on a consistent basis. The water body surrounding the Dalian Lake demonstration area is predominantly slow-flowing (flow rate: 0 to 0.03 m/s), with low transparency and neutral or slightly alkaline water (pH 6.63 to 9.67); these conditions render the area susceptible to forming water blooms. Pollution from nitrogen and phosphorus nutrients at each sampling point was significant, and seasonal differences were noticeable: the water quality in spring and summer is generally better (class Ⅱ to Ⅲ), while some water bodies only meet class Ⅴ standards in autumn and winter. Rainwater runoff in the Dalian Lake demonstration area shows a noticeable first-flush effect: the average concentration of nitrogen and phosphorus in the initial runoff is higher than the class Ⅴ standard for surface water, and the pollution is likewise more significant.
In this paper, we consider the rainwater runoff prevention and control technology demonstration area of the Jinze water source area (Dalian Lake) in Qingpu District, Shanghai; the research area is part of a national major water project under the 13th Five-Year Plan. Our study includes systematic research and analysis of the type and slope of the riparian zone, the nature of the riparian soil, and the species of indigenous plants in the demonstration area; the study provides essential data to support subsequent research on the use of experimental rainwater gradient control technology in the riparian zone. The analysis shows that the riparian zone in the demonstration area consists of near-natural and rigid riparian types with gentle slopes. The most abundant aquatic plants in the zone are lotus and reed, while the most abundant terrestrial plants are herbaceous species. Among the sampling sites in the study area, the average total nitrogen content of the soil in the adjacent farmland fluctuated around 0.95 g/kg, while the soil near the inlet gate measured 0.42 g/kg. The total phosphorus content of the soil in the adjacent residential, fish pond culture, and farmland areas was more than 1.58 g/kg, while the soil at the lakeshore berm measured 1.10 g/kg. The average organic matter content was 11.30 g/kg, with higher values recorded in the densely planted area. These results confirm that local fishpond farming and agriculture have contributed to pollution of the soil environment.
In this paper, we study land use change and its effects on water quality for 30 water bodies in the green-belt area of Shanghai; the analysis is based on the Markov transfer matrix and Pearson correlation analysis of field data and interpreted land use types. The results show that: the water quality is dominated by Grade Ⅳ and inferior Grade Ⅴ; the proportion of water bodies with inferior Grade Ⅴ quality is decreasing year by year; the buffer zone is dominated by construction land, forest, and grassland, with a total proportion of about 84.37%; the decrease in bare land and the increase in construction land accounted for 48.95% of the total reduced area and 50.85% of the total increased area, respectively; on the 300 m buffer scale, grassland had a positive effect on DO and Chl-a; on the 500 m scale, bare land was the main factor in CODMn deterioration; and cultivated land was positively correlated with multiple pollution indicators at both scales.
This review summarizes the latest research progress on the inhibition mechanisms of different nanoparticles on algal blooms. We systematically analyze the influence of environmental factors on the migration and transformation of nutrients and the cytotoxicity processes regulated by nanoparticles. The future prospects for the immobilization of nanoparticles are explored, and the paper proposes ideas to realize the functional performance of nanomaterials while controlling environmental risks. This research sheds light on new strategies for the inhibition of algal blooms.
The Beisan River Basin is an important water source for the Jing-Jin-Ji region. It is important to analyze the temporal and spatial changes in basin water yield and the corresponding driving factors to maintain the security and stability of the ecosystem. Based on meteorology, land use, and soil data, the water yield module of the InVEST model was used to analyze the temporal and spatial change characteristics of water yield in the Beisan River Basin from 2000 to 2017. The contribution of climate and land use change to the change in water yield was explored through scenario simulation. The results showed that from 2000 to 2017, the average annual water yield of the Beisan River Basin was 17.8 × 10⁸ m³; the annual change showed an increasing trend at a rate of 1.03 × 10⁸ m³/a. The spatial distribution pattern of water yield was high in the south and low in the north. The average water yield depth in the south and north was 70.85 mm and 8.83 mm, respectively. The high-value area of water yield was transferred from the southeast Juhe River and Huanxiang River Basin to the southwest Wenhe River and Yongdingbei River Basin. Water yield per unit area across different land use types, ranked from high to low, showed the following order: construction land > cultivated land > water area > unused land > forest land > grassland. From 2000 to 2015, the water yield of cultivated land was the highest, accounting for 51.3% of the total water yield of the basin, while that of construction land increased the most, reaching 144.3%. Scenario simulation results showed that climate and land use change contributed 70.7% and 29.3%, respectively, to the water yield increase, and the surge in precipitation played a leading role.
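The per-pixel water yield in InVEST's annual water yield module is precipitation minus actual evapotranspiration, Y = (1 - AET/P) * P; a minimal sketch with illustrative values (not the study's data):

```python
# Minimal sketch of the InVEST annual water yield calculation:
# per-pixel yield Y = (1 - AET/P) * P, i.e., the precipitation not
# lost to actual evapotranspiration. All numbers are illustrative.

def water_yield(precip_mm, aet_mm):
    """Annual water yield depth (mm) for one pixel."""
    return (1.0 - aet_mm / precip_mm) * precip_mm

# Hypothetical pixels: (precipitation, actual evapotranspiration), in mm
pixels = [(600.0, 520.0), (650.0, 500.0), (550.0, 530.0)]
depths = [water_yield(p, aet) for p, aet in pixels]

# Convert a mean depth (mm) over a basin area (km^2) to a volume (m^3):
# 1 mm of depth over 1 km^2 equals 1000 m^3.
basin_area_km2 = 22000.0
mean_depth_mm = sum(depths) / len(depths)
volume_m3 = mean_depth_mm * basin_area_km2 * 1000.0
```

The depth-to-volume conversion is how per-pixel yields aggregate into basin-scale volumes of the kind reported above.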
In this paper, soil samples were collected from the red soil region of southern China (namely, the Sunjiaba small watershed in Yingtan, Jiangxi) across four different land-use types. Laboratory incubation experiments were subsequently carried out from June 2019 to October 2019. We used a closed chamber to measure soil greenhouse gases (CO₂, CH₄, N₂O) simultaneously with the help of an advanced greenhouse gas analyzer (Picarro-G2508). The aim was to explore the response of soil greenhouse gas emissions across different land-use types to changes in temperature and soil moisture levels in the context of global climate change. The results showed that the global warming potential (GWP) of the four land-use types decreased in the order paddy > orchard > forest > upland. This suggests that greenhouse gas emissions from paddy soils have the greatest relative impact on global warming. In a temperature-controlled experiment, soil CO₂ emissions showed a significant positive correlation with soil temperature. The Q₁₀ values of soil respiration for the four land-use types were 2.61 (forest), 2.51 (upland), 3.12 (orchard), and 3.17 (paddy). Thus, paddy soil respiration has the highest temperature sensitivity, indicating that paddy soil has a higher CO₂ emission potential. Correlations between CH₄ and N₂O emissions and soil temperature were not significant. In the moisture-controlled experiment, soil CO₂ emissions first increased and then decreased with increasing soil moisture, with the maximum emission rate at 20% GWC (gravimetric water content). CH₄ emissions from paddy soils increased with soil moisture (R² = 0.8875); CH₄ fluxes from the other three land-use types, however, were not significantly related to soil moisture. Soil N₂O emissions first increased and then decreased across the soil moisture range measured; all land-use types had the highest N₂O fluxes at 25% GWC.
Residual water level is an important factor affecting water depth; it depends primarily on river discharge, tidal conditions, and wind stress, and it can change significantly in time and space. Studying the temporal and spatial variations in residual water levels, and the factors influencing them, is of great scientific significance and can be applied to estuarine water level prediction, water resources utilization, seawall design, flood protection, and navigation. In this paper, we used a validated three-dimensional numerical model of the estuary and coast to: simulate the temporal and spatial variations in the residual water levels of the Changjiang Estuary; analyze the impacts of river discharge, tidal conditions, and wind stress on residual water levels; and determine the dynamic mechanisms for their change. Because of runoff forcing, upstream residual water levels are higher than downstream levels, and this gradient dominates the spatial and temporal variations in residual water levels of the Changjiang Estuary. The highest residual water level appears in September, reaching 0.861, 0.754, 0.629, 0.554, and 0.298 m at Xuliujing, Chongxi, Nanmen, Baozhen, and the easternmost section of the northern dike of the Deepwater Navigation Channel, respectively. The lowest residual water level appears in January for Xuliujing (0.420 m) and Chongxi (0.391 m), in February for Nanmen (0.313 m) and Baozhen (0.291 m), and in April for the easternmost section of the northern dike of the Deepwater Navigation Channel (0.111 m). The residual water level in the North Branch is lower than that in the South Branch, because only a small amount of river water flows into the North Branch. The residual water level is higher in the South Channel than in the North Channel. Within the South Channel itself, furthermore, the water level is higher on the south side than on the north side due to the Coriolis force, which deflects the flow to the right.
By using numerical experiments to compare the impact of different factors, we found that runoff has the largest impact on residual water levels, tidal conditions have the second largest impact, and wind has minimal impact. The monthly mean river discharge is largest in July, which should lead to the highest residual water level, but southeasterly winds prevail in the same period, leading to smaller residual water levels. The river discharge in September remains high and northerly winds prevail, driving landward Ekman water transport and resulting in a residual water level rise in the estuary. The interaction between the river discharge and the northerly wind makes the residual water level highest in September rather than in July. In conclusion, this study reveals the dynamic mechanism behind the highest residual water level being observed in September.
Using the wave-shaped features of remote sensing images, the wavelength of ocean waves can be determined with the wavelet method. Shallow water depths can then be estimated from the wavelength, because the wavelength becomes shorter as the water depth decreases. In this paper, remote sensing data were replaced by ideal elevation data, and numerical simulation data were used to study the performance of the Complex Morlet Wavelet method in estimating wavelength and water depth. In particular, the effects of data resolution and sub-image size on water depth estimation were explored. The results from the ideal elevation data show that, when the wavelength has no spatial change and the size of the sub-image is greater than the wavelength, the data resolution has no substantial effect on the wavelength estimation provided there are more than nine evenly distributed data grids in one image. This behavior can be explained by the wavelength-energy spectrum. When the wavelength changes spatially, accurate estimation of the wavelength requires that the sub-image size be larger than twice the wavelength and that there be at least four data grids per wavelength. Estimating the wavelength from numerically simulated data requires a similar sub-image size and number of data grids. The error of water depth estimation increases slightly if the sub-image size is too large, and also increases slightly as the resolution of the data decreases.
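The depth inversion behind this approach rests on the linear dispersion relation w^2 = g*k*tanh(k*h). A sketch of the inversion, assuming the wave period is known and leaving the wavelet-based wavelength estimation aside:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def wavelength_from_depth(period_s, depth_m):
    """Solve the linear dispersion relation w^2 = g*k*tanh(k*h) for the
    wavenumber k by bisection, then return the wavelength L = 2*pi/k."""
    w2 = (2.0 * math.pi / period_s) ** 2
    lo, hi = 1e-6, 10.0
    for _ in range(200):
        k = 0.5 * (lo + hi)
        # g*k*tanh(k*h) is increasing in k, so bisect toward w^2
        if G * k * math.tanh(k * depth_m) < w2:
            lo = k
        else:
            hi = k
    return 2.0 * math.pi / k

def depth_from_wavelength(period_s, wavelength_m):
    """Invert the dispersion relation: h = atanh(w^2 / (g*k)) / k."""
    k = 2.0 * math.pi / wavelength_m
    w2 = (2.0 * math.pi / period_s) ** 2
    return math.atanh(w2 / (G * k)) / k

# Round trip: a 10 m deep site with 8 s waves
L = wavelength_from_depth(8.0, 10.0)
h = depth_from_wavelength(8.0, L)
```

Shallower water shortens the wavelength, which is exactly the signal the wavelet method extracts from the image.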
As one of the basic components of natural language processing, named entity recognition (NER) has been an active area of research both domestically in China and abroad. With the rapid development of financial applications, Chinese NER has improved over time and been applied successfully throughout the financial industry. This paper provides a summary of the current state of research and future development trends for Chinese NER methods in the financial field. Firstly, the paper introduces concepts related to NER and the characteristics of Chinese NER in the financial field. Then, based on the development process, the paper provides an overview of detailed characteristics and typical models for dictionary and rule-based methods, statistical machine learning-based methods, and deep learning-based methods. Next, the paper summarizes public data collection tools, evaluation methods, and applications of Chinese NER in the financial industry. Finally, the paper explores current challenges and future development trends.
Named entity recognition is the task of extracting instances of named entities from continuous natural language text. It plays an important role in information extraction and is closely related to other information extraction tasks. In recent years, deep learning methods have been widely used in named entity recognition tasks and have achieved good performance. The most common named entity recognition models use sequence tagging, which relies on the availability of a high-quality annotated corpus. However, the annotation cost of sequence data is high; this leads to small training sets, which in turn seriously limit the final performance of named entity recognition models. To enlarge training sets for named entity recognition without increasing the associated labor cost, this paper proposes a data augmentation method for named entity recognition based on EDA, distant supervision, and bootstrapping. Through experiments on the FIND-2019 dataset, this paper shows that the proposed data augmentation techniques and combinations thereof can significantly improve the overall performance of named entity recognition models.
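One label-aware augmentation in the spirit of EDA plus distant supervision is to swap each entity mention for a same-type mention drawn from an external dictionary. A sketch with a hypothetical dictionary and BIO-tagged characters (the paper's actual operations may differ):

```python
import random

# Hypothetical entity dictionary, as might be built by distant supervision
ENTITY_DICT = {
    "ORG": ["平安银行", "招商证券", "华泰基金"],
    "PER": ["张伟", "李娜"],
}

def augment(tokens, labels, rng):
    """Replace each entity mention with a random same-type mention.
    tokens/labels use the BIO scheme; non-entity tokens are kept."""
    out_tokens, out_labels, i = [], [], 0
    while i < len(tokens):
        if labels[i].startswith("B-"):
            etype = labels[i][2:]
            j = i + 1
            while j < len(tokens) and labels[j] == "I-" + etype:
                j += 1  # consume the whole mention
            mention = rng.choice(ENTITY_DICT.get(etype, ["".join(tokens[i:j])]))
            chars = list(mention)
            out_tokens += chars
            out_labels += ["B-" + etype] + ["I-" + etype] * (len(chars) - 1)
            i = j
        else:
            out_tokens.append(tokens[i])
            out_labels.append(labels[i])
            i += 1
    return out_tokens, out_labels

rng = random.Random(0)
toks = list("我在平安银行上班")
labs = ["O", "O", "B-ORG", "I-ORG", "I-ORG", "I-ORG", "O", "O"]
new_toks, new_labs = augment(toks, labs, rng)
```

Because the replacement respects entity boundaries, the augmented sentence remains a valid training example at no extra annotation cost.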
Extraction of entities and relationships from text data is used to construct and update domain knowledge graphs. In this paper, we propose a method to jointly extract entities and relations by incorporating the concept of active learning; the proposed method addresses problems related to the overlap of vertical domain data and the lack of labeled samples in financial technology domain text data under the traditional approach. First, we incrementally select informative samples as training data sets. Next, we transform the joint extraction of entities and relations into a sequence labeling problem by labeling the main entities. Finally, we perform joint extraction using the improved BERT-BiGRU-CRF model for construction of a knowledge graph, and thus facilitate financial analysis, investment, and transaction operations based on domain knowledge, thereby reducing investment risks. Experimental results on financial text data show the effectiveness of our proposed method and verify that the method can be successfully used to construct financial knowledge graphs.
With the advent of the big data era, the financial industry has been generating increasing volumes of data, exerting pressure on database systems. LevelDB is a key-value database, developed by Google, based on the LSM-tree architecture. It offers fast writes and a small footprint and is widely used in the financial industry. In this paper, we propose a design method for the L0 layer, based on non-volatile memory and machine learning, with the aim of addressing the shortcomings of the LSM-tree architecture, including write pauses, write amplification, and poor read performance. The proposed solution can mitigate or even eliminate the aforementioned problems; the experimental results demonstrate that the design achieves better read and write performance.
Blockchain systems adopt a full-replication data storage mechanism in which each node retains a complete copy of the whole blockchain, so system scalability is poor. Because Byzantine nodes exist in blockchain systems, the sharding schemes used in traditional distributed systems cannot be applied directly. In this paper, the storage consumption of each block is reduced from O(n) to O(1) by combining erasure coding with a Byzantine fault-tolerant algorithm, enhancing the scalability of the system. This paper proposes a method to partition block data, which reduces storage redundancy while having little effect on query efficiency. A coding-block storage method without network communication is proposed to reduce system storage and communication overhead. In addition, a dynamic recoding method for nodes joining and leaving the blockchain is proposed, which both ensures the reliability of the system and reduces recoding overhead. Finally, the system is implemented on the open-source blockchain system CITA, and extensive experiments demonstrate improved scalability, availability, and storage efficiency.
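The O(n)-to-O(1) storage reduction can be illustrated with the simplest possible erasure code, a single XOR parity shard; real systems would use Reed-Solomon-style codes tolerating more failures, but the per-node storage arithmetic is the same in spirit:

```python
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(block: bytes, k: int):
    """Split a block into k data shards plus one XOR parity shard.
    Each node then stores one shard of size ~|block|/k instead of
    the whole block."""
    shard_len = -(-len(block) // k)                 # ceil division
    padded = block.ljust(k * shard_len, b"\0")
    shards = [padded[i*shard_len:(i+1)*shard_len] for i in range(k)]
    parity = reduce(xor_bytes, shards)
    return shards + [parity]                        # k + 1 shards total

def recover(shards, lost_index):
    """Rebuild any single missing shard as the XOR of all the others."""
    rest = [s for i, s in enumerate(shards) if i != lost_index]
    return reduce(xor_bytes, rest)

block = b"a full copy of one blockchain block"
shards = encode(block, k=4)
rebuilt = recover(shards, lost_index=2)
```

With full replication every node stores |block|; here each node stores roughly |block|/k, and a lost shard is still recoverable, which is the storage/fault-tolerance trade-off the paper builds on.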
As a decentralized distributed ledger, blockchain technology is widely used to share data between untrusted parties. Compared with traditional databases that have been refined over many years, blockchains cannot support rich queries, are limited to single query interfaces, and suffer from slow response. Simple organizational structures and discrete storage are the main barriers that limit the expressiveness of transaction data. To make up for these shortcomings of existing blockchain technology and achieve efficient application development, we propose a general data management middleware for blockchain that lets users build abstract models, encapsulates easy-to-use interfaces, and improves query efficiency. It has the following characteristics: ① Support for custom construction of data models and the flexibility to add new abstractions to transaction data; ② Multiple data access interfaces to support rich queries, with optimization methods such as synchronous caching mechanisms to improve query efficiency; ③ Advance hash calculation and asynchronous batch processing strategies to optimize transaction latency and throughput. We integrated the proposed data management middleware with the open-source blockchain CITA and verified its ease of use and efficiency through experiments.
Query processing, including optimization and execution, is one of the most critical functionalities of modern relational database management systems (DBMS). The complexity of query processing functionalities, however, leads to high testing costs. It hinders rapid iteration during the development process and can lead to severe errors when deployed in production environments. In this paper, we propose a tool to better serve the testing and evaluation of DBMS query processing functionalities; the tool uses a fuzzing approach to generate random data that is highly associated with primary keys and generates valid complex analytical queries. The tool constructs constrained optimization problems to efficiently compute the exact cardinalities of operators in queries and furnishes these cardinalities as ground truth for verification. We conducted small-scale testing of our method on different versions of TiDB and demonstrated that the tool can effectively detect bugs in different versions of TiDB.
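The oracle idea, comparing an engine's answer against an independently computed exact cardinality, can be miniaturized with sqlite3 standing in for the system under test (a toy random-predicate fuzzer, not the paper's constrained-optimization generator):

```python
import random
import sqlite3

rng = random.Random(42)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v INTEGER)")
rows = [(i, rng.randint(0, 9)) for i in range(1000)]
conn.executemany("INSERT INTO t VALUES (?, ?)", rows)

for _ in range(20):
    # Fuzz a random range predicate and compute its exact cardinality
    lo, hi = sorted(rng.sample(range(10), 2))
    expected = sum(1 for _, v in rows if lo <= v <= hi)   # independent oracle
    got = conn.execute(
        "SELECT COUNT(*) FROM t WHERE v BETWEEN ? AND ?", (lo, hi)
    ).fetchone()[0]
    assert got == expected, f"cardinality mismatch on [{lo}, {hi}]"
```

A mismatch between the engine's count and the oracle's count would flag a query processing bug, which is the detection principle applied to TiDB above.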
In the era of big data, the single-write, multi-read model of storage-compute-separated architectures can no longer meet the demands for efficient reading and writing of massive datasets. Having multiple computing nodes provide write services concurrently, however, can cause cache inconsistencies. Some studies have proposed a globally ordered transaction log to detect conflicts and maintain data consistency for the whole system through broadcast and replay of the transaction log. However, this scheme scales poorly because every write node maintains the full global write log. To solve this problem, this paper proposes a partition-based concurrency control scheme, which uses partitioning to reduce the transaction log each write node must maintain and effectively improves the system's overall scalability.
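The core of the partition-based scheme, each write node keeping only the log entries of partitions it owns rather than the full global log, can be sketched as follows (class and function names are hypothetical):

```python
import zlib

NUM_PARTITIONS = 4

def partition_of(key: str) -> int:
    """Deterministically map a key to a partition."""
    return zlib.crc32(key.encode()) % NUM_PARTITIONS

class WriteNode:
    """Maintains a transaction log only for the partitions it owns,
    instead of a copy of the whole global log."""
    def __init__(self, owned):
        self.owned = set(owned)
        self.log = []

    def apply(self, txn_id, key, value):
        if partition_of(key) in self.owned:
            self.log.append((txn_id, key, value))

# Two write nodes splitting four partitions between them
nodes = [WriteNode([0, 1]), WriteNode([2, 3])]
global_log = [(i, f"key{i}", i * 10) for i in range(100)]
for txn in global_log:
    for node in nodes:
        node.apply(*txn)
```

Every transaction lands in exactly one node's log, so per-node log size shrinks as nodes are added, which is the source of the improved scalability.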
Given challenges with poor query performance for databases using LSM-trees, the present research explores the use of index and cache technologies to improve the query performance of LSM-trees. First, the paper introduces the basic structure of an LSM-tree and analyzes the factors that affect query performance. Second, we analyze current query optimization technologies for LSM-trees, including index optimization technology and cache optimization technology. Third, we analyze how index and cache, in particular, can improve the query performance of databases using LSM-trees and summarize existing research in this area. Finally, we present possible avenues for further research.
Electricity theft results in significant losses in both electric energy and economic benefits for electric power enterprises. This paper proposes a method to detect electricity theft based on t-LeNet and time series classification. First, a user's power consumption time series data is obtained, and down-sampling is used to generate a training set. A t-LeNet neural network can then be used to train and predict classification results for determining whether the user exhibits behavior reflective of electricity theft. Lastly, real user power consumption data from the state grid can be used to conduct experiments. The results show that compared with the time series classification methods based on Time-CNN (Time Convolutional Neural Network) and MLP (Multi-Layer Perceptron), the proposed method offers improvements in the comprehensive evaluation index, accuracy rate, and recall rate. Hence, the proposed method can successfully detect electricity theft.
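The down-sampling step used to build the training set can be sketched as follows (an illustrative window-averaging scheme; the t-LeNet classifier itself is not shown):

```python
def down_sample(series, factor):
    """Average consecutive windows of `factor` readings, shrinking the
    series length by `factor` while keeping its overall shape."""
    n = len(series) // factor
    return [sum(series[i*factor:(i+1)*factor]) / factor for i in range(n)]

# A hypothetical daily consumption series (kWh) over 30 days
daily = [float(5 + (d % 7)) for d in range(30)]
weekly_scale = down_sample(daily, 7)           # 4 coarse readings
training_views = [daily, down_sample(daily, 2), weekly_scale]
```

Feeding the classifier several temporal resolutions of the same series both enlarges the training set and exposes consumption patterns at different scales.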
With the increasing popularity of sensors, time-series data have attracted significant attention. Early time series classification (ETSC) aims to classify time-series data as accurately as possible using as few early observations as possible. ETSC, in particular, plays a critical role in fintech. First, this paper summarizes the common classifiers for time-series data and reviews the current research progress on minimum prediction length-based, shapelet-based, and model-based ETSC frameworks. We discuss the pivotal technologies, advantages, and disadvantages of representative ETSC methods within each framework. Next, we review public time-series datasets in fintech and commonly used performance evaluation criteria. Lastly, we explore future research directions pertinent to ETSC.
Traditional worker helmet wearing detection models commonly used at construction sites suffer from long processing times and high hardware requirements; the limited number of available training data sets for complex and changing environments, moreover, contributes to poor model robustness. In this paper, we propose a lightweight helmet wearing detection model, named YOLO-S, to address these challenges. First, for the case of unbalanced data set categories, a hybrid scene data augmentation method is used to balance the categories and improve the robustness of the model in complex construction environments; the original YOLOv5s backbone network is replaced with MobileNetV2, which reduces the network's computational complexity. Second, the model is compressed: a scaling factor is introduced in the BN layer for sparse training, the importance of each channel is judged, redundant channels are pruned, and the volume of model inference calculations is further reduced; these changes increase the overall model detection speed. Finally, YOLO-S is obtained by fine-tuning the auxiliary model for knowledge distillation. The experimental results show that, compared with YOLOv5s, the recall rate of YOLO-S increases by 1.9% and its mAP by 1.4%; the number of model parameters is compressed to 1/3, the model volume to 1/4, and the FLOPs to 1/3 of those of YOLOv5s; the inference speed is faster than that of the other models, and the portability is higher.
Accurate classification of power system customers can enable differentiated management and personalized services for customers. In order to address the challenges associated with accurate customer classification, this paper proposes a classification method based on an equilibrium optimizer and an extreme learning machine. In this method, an adaptive competition mechanism is proposed to balance the global exploration and local exploitation ability of the equilibrium optimizer, improving its performance in finding optimal solutions. Thereafter, the proposed equilibrium optimizer is integrated with an extreme learning machine to classify the customers of a power system. Experiments on real data sets showed that the proposed algorithm, integrated with an extreme learning machine, performs more accurately on different classification indexes; hence, the proposed method can provide an effective technical means for power system customer management and service.
Knowledge graphs are an effective way to structurally represent and organize unstructured knowledge; in fact, these graphs are commonly used to support many intelligent applications. However, product-related knowledge is typically massive in scale, heterogeneous, and hierarchical; these characteristics present a challenge for traditional knowledge query processing methods based on relational and graph models. In this paper, we propose a solution to address these challenges by designing and implementing a product knowledge query processing method using CPU and GPU collaborative computing. Firstly, in order to leverage the full parallel computing capability of the GPU, a product knowledge storage strategy based on a sparse matrix is proposed and optimized for the scale of the task. Secondly, based on the storage structure of the sparse matrix, a query conversion method is designed, which transforms a SPARQL query into a corresponding matrix calculation and extends the join query algorithm to the GPU for acceleration. In order to verify the effectiveness of the proposed method, we conducted a series of experiments on an LUBM dataset and a semi-synthetic dataset of products. The experimental results showed that the proposed method not only improves retrieval efficiency for large-scale product knowledge datasets compared with existing RDF query engines, but also achieves better retrieval performance on a general RDF standard dataset.
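Treating each predicate as a sparse Boolean matrix turns a two-hop SPARQL join into a matrix product. A CPU-only, dict-of-sets sketch of that algebra, with toy triples and hypothetical names (the GPU version would operate on the same structure):

```python
# Each predicate is a sparse Boolean matrix stored as {subject: {objects}}.
# The join  ?x p ?y . ?y q ?z  then becomes the Boolean product p @ q.

def sparse_mul(p, q):
    """Boolean product of two sparse relations: all pairs (x, z) such
    that some intermediate y links them."""
    out = {}
    for x, ys in p.items():
        reach = set()
        for y in ys:
            reach |= q.get(y, set())
        if reach:
            out[x] = reach
    return out

# Toy product knowledge: partOf and madeBy relations
part_of = {"wheel": {"car"}, "screen": {"phone", "laptop"}}
made_by = {"car": {"acme"}, "phone": {"globex"}}

# SPARQL-style query: ?part partOf ?prod . ?prod madeBy ?maker
answers = sparse_mul(part_of, made_by)
```

Because each row's reachable set is computed independently, the outer loop parallelizes naturally, which is what makes the GPU offload in the paper attractive.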
With the rapid proliferation of technology, the degree of informatization in the financial industry continues to increase. The integration of financial data with power marketing platforms, moreover, is accelerating the interaction between users and power marketing platform data (e.g., basic customer details, energy metering data, electricity fee recovery data). The increased interaction, however, raises the risk of data leakage during transmission, which can result in the incorrect formulation of power usage strategies and electricity prices. Therefore, to satisfy the security requirements for data interaction in power marketing systems and ensure economic benefits for the power company, we propose a multi-level access control scheme based on Ciphertext-Policy Attribute-Based Encryption (CP-ABE) with an Ordered Binary Decision Diagram (OBDD) access structure. This multi-level access approach can reduce the autonomy of shared data authority control in the remote terminal unit and improve the efficiency of data access. In addition, security and performance analysis shows that the proposed access control scheme is both more efficient and more secure than other schemes.
In this paper, we propose a mathematical model for the multi-objective cargo allocation problem that maximizes the total cargo weight while minimizing the total number of trips and the number of cargo loading and unloading points, and we solve it stably and efficiently with a fast elitism genetic algorithm (FEGA). First, a hierarchical structure based on the Pareto dominance relation and an elitism retention strategy are added to the genetic algorithm. This helps to improve population diversity while strengthening the local search ability of the algorithm. Then, the construction of the initial population is modified from a purely random structure to a double-population strategy, and an adaptive operation is added to improve the global search ability of the algorithm and accelerate the convergence of the population. Finally, real cargo data are used to demonstrate the feasibility and optimization potential of the new method. The results show that, compared with the traditional genetic algorithm, the proposed algorithm has a better optimization effect on cargo allocation problems with strong constraints and a large search space; its search performance and convergence are also improved.
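The Pareto-dominance ranking and elitism retention underlying such algorithms can be sketched as follows (toy objective vectors, both objectives minimized; the full GA loop is omitted):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives
    minimized): no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population):
    """Non-dominated individuals; under elitism retention these
    survive into the next generation unchanged."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]

# Hypothetical (number_of_trips, loading_points) pairs to be minimized
pop = [(3, 5), (2, 7), (4, 4), (2, 6), (3, 6)]
elite = pareto_front(pop)
```

Retaining the whole non-dominated set, rather than a single best individual, is what preserves diversity across conflicting objectives.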
The International Mathematical Olympiad (IMO) is one of the most important and influential global youth intellectual competitions. However, there is little research on how to effectively organize the competition at the national level to help cultivate talent in mathematics, science, and technology. The Mathematical Olympiad originated from competitions in solving mathematical problems. Many outstanding mathematicians and scientists were IMO winners and benefited from the experience in their subsequent careers. The Mathematical Olympiad helps to select and train gifted students in mathematics. China's outstanding historical achievements in the IMO have attracted the attention of the world. Many of China's students who exhibited exceptional performance at the IMO later became outstanding mathematicians, scientists, and technologists. These achievements need to be publicized, and the Chinese experience at the Mathematical Olympiad needs to be summarized and promoted. This article summarizes the history of the IMO and reviews the practices of the IMO in China based on the literature. China uses a number of strategies to ensure outstanding results in the IMO, including: the selection of contestants from existing domestic programs (the National High School Mathematics Joint Competition, the Chinese Mathematical Olympiad, and the National Training Team); a multi-level educational system based on school training; and the accumulation and publication of relevant learning materials. The outbreak of the novel coronavirus has affected the normal proceedings of the IMO, but China has implemented effective countermeasures. There are still some misunderstandings about the Mathematical Olympiad in China. By publicizing former contestants who participated in the IMO and went on to make outstanding contributions, China can help the public better appreciate the Mathematical Olympiad.
At the same time, the Chinese experience at the IMO is an important reference for other countries in organizing competition training and selecting and nurturing gifted students in mathematics.
In this paper, we introduce a class of 3-ary algebras, called 3-Lie-Rinehart algebras, and we discuss their basic structure. The 3-Lie-Rinehart algebras are constructed using 3-ary differentiable functions, modules of known 3-Lie algebras, and inner derivations of 3-Lie algebras.
In this article, using methods such as the partial fraction method, we study a set of combinatorial identities for Euler-type sums. We calculate, furthermore, closed forms for finite sums of products of higher-order shifted harmonic numbers and reciprocals of binomial coefficients. By specializing the parameters, further interesting identities can be obtained.
Braided vector algebras are an important class of Hopf algebras in braided tensor categories. In this paper, it is shown that braided vector algebras are isomorphic to quantum vector spaces as associative algebras; hence, the algebraic structure of braided vector algebras and three equalities of the pair $ (R',R)$ are recovered from representations of quantized enveloping algebras $ U_q(\mathfrak g)$ .
In this paper, we use the logarithmic derivative lemma for several complex variables to extend the Milloux inequality to differential polynomials of entire functions. As an application, we subsequently apply the concept to two Picard-type theorems: (1) Let $ f $ be an entire function in $\mathbb{C}^{n}$ and $a, b\;(\neq 0)$ be two distinct complex numbers. If $ f\neq a, {\cal{P}}\neq b, $ then $ f $ is constant. (2) If $ f^{s}D^{t_{1}}(f^{s_{1}})\cdots D^{t_{q}}(f^{s_{q}})\neq b $ and $ s+ $ $ \sum_{j = 1}^{q}s_{j}\geqslant 2+\sum_{j = 1}^{q}t_{j}, $ then $ f $ is constant, where $ D^{k}f $ is the $ k $ -th total derivative of $ f $ and $ {\cal{P}} $ is a differential polynomial of $ f $ with respect to the total derivative.
This paper explores the relationship between the number of solutions and the parameter $ s $ of second-order discrete periodic boundary value problems of the form $\left\{ \begin{array}{ll} \Delta^{2} u(t-1)+f\Delta u(t)+g(t,u(t)) = s, \;t\in[1,T]_{\mathbb{Z}}, \\ u(0) = u(T-1),\;\Delta u(0) = \Delta u(T-1), \end{array} \right.$ where $g: [1,T]_{\mathbb{Z}}\times \mathbb{R}\to\mathbb{R}$ is a continuous function, $ f\geqslant0 $ is a constant, $ T\geqslant2 $ is an integer, and $ s $ is a real number. By using the upper and lower solution method and the theory of topological degree, we obtain Ambrosetti-Prodi type alternatives: there exists a constant $ s_{0}\in \mathbb{R} $ such that the problem has zero, at least one, or at least two solutions according to the relation of the parameter $ s $ to $ s_{0} $.
Xiao introduced a series of singularity indices to study hyperelliptic fibrations. However, it remains unknown whether the second singularity index, $ s_2 $ , is non-negative. In this paper, I present a series of examples of degenerations of curves in which $s_2$ tends to $-\infty$ as the genus $g$ grows. Moreover, I obtain a lower bound on $s_2$ for a given genus $g$ , thereby confirming that the index $s_2$ of fibrations of genus $g=2,3,4$ is non-negative.
Existing methods of keypoint matching were designed for grayscale images and are not suitable for very high-resolution images. Mural images typically have very high resolution and may contain areas with identical gray textures but different colors. For this special kind of image, this paper proposes a fast keypoint matching algorithm for high-resolution mural images (NeoKPM for short). NeoKPM has two main innovations: (1) the homography matrix of rough registration for the original image is obtained by downsampling the image, which substantially reduces the time required for keypoint matching; (2) a feature descriptor based on gray and color invariants is proposed, which can distinguish textures of different colors with the same gray level, thereby improving the correctness of keypoint matching. In this paper, the performance of the NeoKPM algorithm is tested on a real mural image library. The experimental results show that, on mural images with a resolution of 80 million pixels, the number of correct matching points per pair of images is nearly 100 000 higher than that of the SIFT (Scale Invariant Feature Transform) algorithm, the processing speed of keypoint matching is more than 20 times faster than that of the SIFT algorithm, and the average per-pixel registration error between image pairs is less than 0.04 pixels.
Image stitching is one of the key technologies in the application of large-field microscopic digital images. Traditional methods stitch images in a fixed order after registration; once an error occurs, it accumulates along that fixed path, causing problems such as misalignment of subsequent images. In this study, through experimental analysis, a method for optimizing the stitching path of the large-field image is proposed, which greatly alleviates the problems caused by error accumulation and registration failure and effectively improves the stitching quality of large-field microscopic digital images. The method can be used not only for the stitching of large-field microscopic images but also for other types of image stitching.
With the emergence of low-latency applications such as driverless cars, online gaming, and virtual reality, it is becoming increasingly difficult to meet users’ demands for service quality with the traditional centralized mobile cloud computing model. To make up for the shortcomings of cloud computing, mobile edge computing has emerged; it provides users with computing and storage resources by migrating computing tasks to network edge servers through computation offloading. However, most existing work considers only single-objective optimization of delay or energy consumption and neglects their balanced optimization. Therefore, in order to reduce both task delay and device energy consumption, a multi-user joint computation offloading and resource allocation strategy is proposed. In this strategy, the Lagrange multiplier method is used to obtain the optimal allocation of computing resources for a given offloading decision. Then, a greedy computation offloading algorithm is proposed to obtain the optimal offloading decision; the final solution is obtained through iteration. Experimental results show that, compared with the benchmark algorithms, the proposed algorithm can reduce system cost by up to 40%.
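For a fixed offloading decision, the Lagrange-multiplier step admits a closed form under a standard cost model: minimizing a weighted sum of computation delays $\sum_i w_i c_i / f_i$ subject to the capacity constraint $\sum_i f_i = F$ yields $f_i \propto \sqrt{w_i c_i}$ from the KKT conditions. The sketch below assumes that objective; the paper's exact cost terms are not specified in the abstract.

```python
import math

def allocate_edge_cpu(compute_demand, weights, F):
    """Closed-form resource split from the KKT conditions of
    minimize sum_i w_i * c_i / f_i  subject to  sum_i f_i = F:
    f_i = F * sqrt(w_i * c_i) / sum_j sqrt(w_j * c_j)."""
    roots = [math.sqrt(w * c) for w, c in zip(weights, compute_demand)]
    total = sum(roots)
    return [F * r / total for r in roots]
```

For instance, two offloaded tasks with demands 4 and 1 (equal weights) split a capacity of 3 as [2.0, 1.0]; a greedy outer loop can then flip individual offloading decisions and re-run this allocation until the total cost stops improving.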
With the continuous development of Internet technology, network security is garnering increasing attention. Network anomalous traffic detection can provide an effective guarantee for blocking network attacks. However, accurately detecting anomalous traffic in a network usually requires analyzing large volumes of data, which not only consumes substantial computational resources and reduces real-time detection capability, but may also reduce the overall accuracy of detection. To solve these problems, we propose a network anomalous traffic detection method based on ensemble feature selection. Specifically, we use five different feature selection algorithms to design a voting mechanism for selecting feature subsets. Three different machine learning algorithms (Naive Bayes, Decision Tree, XGBoost) are used to evaluate the feature selection algorithms, and the best one is selected to detect abnormal network traffic. The experimental results show that, on the optimal feature subset selected by the proposed approach, the runtime is 84.38% less than on the original data set, and the average accuracy is 16.93% higher than that of a single feature selection algorithm.
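The voting mechanism over the five selectors can be sketched as follows: each selector nominates a feature subset, and a feature survives if enough selectors agree. The `min_votes` threshold and the feature names are illustrative; the abstract does not state the exact voting rule.

```python
from collections import Counter

def vote_select(selected_subsets, min_votes):
    """Ensemble feature selection by voting: keep every feature that at
    least `min_votes` of the individual selectors included."""
    votes = Counter(f for subset in selected_subsets for f in set(subset))
    return sorted(f for f, v in votes.items() if v >= min_votes)
```

With five selectors, a majority threshold of 3 keeps only features that most selectors agree on, which is what shrinks the data set before the downstream classifiers are trained.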
The advanced capabilities of artificial intelligence (AI) are widely used to process large volumes of data in real time to achieve rapid response. However, conventional methods for deploying AI-based applications can incur substantial computational and communication overhead. To solve this problem, a deep model Edge-Cloud collaborative acceleration mechanism based on network compression and partitioning is proposed. It compresses and partitions deep neural networks (DNNs) and deploys AI models in practical applications in the form of an Edge-Cloud collaboration for rapid response. First, the proposed method compresses the neural network to reduce its execution latency and generates new layers that can serve as candidate partition points. It then trains a series of prediction models to find the best partition point and splits the compressed neural network model into two parts, which are deployed on the edge device and the cloud server, respectively, and collaborate to minimize the overall latency. Experimental results show that, compared with four benchmark methods, the proposed scheme reduces the total latency of the deep model by more than 70%.
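The partition-point search can be illustrated with a simple additive latency model: for each candidate cut k, sum the edge latency of the layers before the cut, the time to transmit the activation at the cut, and the cloud latency of the remaining layers. The per-layer numbers and the additive model are illustrative; the paper uses trained prediction models to estimate these terms.

```python
def best_partition(edge_lat, cloud_lat, act_bytes, bandwidth):
    """Pick the layer index k that minimizes total latency when layers
    0..k-1 run on the edge and layers k..n-1 run in the cloud.
    act_bytes[k] is the size of the tensor crossing the cut at k
    (the raw input for k=0, the final output for k=n)."""
    n = len(edge_lat)
    best_k, best_cost = 0, float("inf")
    for k in range(n + 1):
        edge = sum(edge_lat[:k])            # layers executed on the edge
        cloud = sum(cloud_lat[k:])          # layers executed in the cloud
        tx = act_bytes[k] / bandwidth       # transfer at the cut
        cost = edge + tx + cloud
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k, best_cost
```

With a slow edge, a fast cloud, and a large raw input, the model typically favors cutting after an early layer whose activation is small, which matches the intuition that compression creates cheap transfer points.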
In recent years, non-volatile memory (NVM) has developed rapidly. Its advantages include persistence, large capacity, low latency, byte addressability, high density, and low energy consumption, all of which have influenced current database system architecture. SQLite is a lightweight relational database widely used in embedded fields such as mobile platforms. It operates as a serverless, zero-configuration, transactional SQL database engine. It maintains a cache for each connection, which leads to large space overhead and costly data consistency checking. At the same time, it adopts a relatively simple serialized single-writer transaction execution method and page-based logging, which lead to low performance and write amplification in journal mode and high storage space requirements in WAL mode. To address these challenges, a new NVM-based SQLite cache scheme, SQLite-CC (Copy Cache), is constructed. It fully considers the hardware characteristics of non-volatile memory, guarantees the atomicity of transactions through a CC-manager, and adds an updated-page index to ensure the consistency of database files and the cache. Benchmarks show that SQLite-CC achieves the same concurrency performance as SQLite in WAL mode; compared with rollback mode, it improves transaction execution performance by 3 times, reduces latency by 40%, and effectively eliminates write amplification on disk.
With the rapid development of smart grids, the construction of new digital infrastructure has become one of the core businesses of power companies. Power companies’ data governance and intelligent analysis capabilities create opportunities for business model innovation, such as platform operation and data value-added services. In the context of power digitization and intelligent governance, this paper applies the robust random cut forest algorithm to intelligent anomaly detection on transformer loss data. The algorithm partitions sample points by random cuts to construct a random cut forest, inserting and removing sample points in the structure; the anomaly score of a sample point is then given by its influence on model complexity. This method is suitable for anomaly detection on real-time loss data and offers high credibility, effectiveness, and efficiency. An experiment on real transformer loss data shows that the method is efficient and flexible, and that its accuracy, recall, and efficiency are substantially better than those of the alternatives.
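The random-cut intuition behind the detector can be illustrated in one dimension: points that are separated from the rest of the data by very few random cuts are anomalous. The sketch below scores by average isolation depth, a deliberate simplification of RRCF's collusive-displacement score; the full algorithm also maintains the trees dynamically under insertion and deletion, which is omitted here.

```python
import random

def build_tree(points, depth=0, max_depth=12):
    """Recursively split a 1-D point set with cuts drawn uniformly from
    its range; returns (cut, left_subtree, right_subtree) or None for a leaf."""
    if len(points) <= 1 or depth >= max_depth or min(points) == max(points):
        return None
    cut = random.uniform(min(points), max(points))
    left = [p for p in points if p <= cut]
    right = [p for p in points if p > cut]
    if not left or not right:
        return None
    return (cut, build_tree(left, depth + 1, max_depth),
            build_tree(right, depth + 1, max_depth))

def depth_of(tree, x, depth=0):
    """Depth at which x lands in a random cut tree."""
    if tree is None:
        return depth
    cut, left, right = tree
    return depth_of(left if x <= cut else right, x, depth + 1)

def anomaly_score(forest, x):
    """Shallow average isolation depth -> high anomaly score."""
    d = sum(depth_of(t, x) for t in forest) / len(forest)
    return 1.0 / (1.0 + d)
```

A point far outside the bulk of the loss data is cut off near the root of almost every tree, so its average depth is small and its score high, whereas points inside the bulk require many cuts to isolate.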
Author name disambiguation is an important step in constructing an academic knowledge graph. The issue of ambiguous names is widely prevalent in academic literature due to missing data, ambiguous names, or abbreviations. This paper proposes an unsupervised author name disambiguation method based on heterogeneous networks, with the goal of addressing the problems of inadequate information utilization and cold start; the proposed method automatically learns the features of papers with ambiguous author names. First, the method preprocesses the author, organization, title, and keyword strings by lemmatization. It then learns embedded representations of text features with the word2vec and TF-IDF methods, and embedded representations of structural features with meta-path random walks and word2vec. After merging the structural and textual similarities, disambiguation is performed by DBSCAN clustering followed by merging isolated papers. Experimental results show that the proposed model significantly outperforms existing models on a small dataset and in engineering applications of cold-start unsupervised author name disambiguation, indicating that the model is effective and applicable in real-world settings.
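The feature-merging and clustering steps can be sketched as follows: blend the text and structural embeddings into one distance and run DBSCAN on it. The blending weight `alpha`, `eps`, and `min_pts` are illustrative parameters, and the embeddings below are toy vectors standing in for the word2vec/TF-IDF and meta-path outputs.

```python
import math

def cosine(u, v):
    """Cosine similarity of two dense vectors (0 if either is zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def combined_distance(p, q, alpha=0.5):
    """Distance = 1 - weighted blend of text and structural similarity."""
    sim = (alpha * cosine(p["text"], q["text"])
           + (1 - alpha) * cosine(p["struct"], q["struct"]))
    return 1.0 - sim

def dbscan(papers, eps=0.3, min_pts=2, alpha=0.5):
    """Standard DBSCAN over the blended distance; -1 marks noise
    (isolated papers, to be merged in a later step)."""
    n = len(papers)
    labels = [None] * n
    def neighbors(i):
        return [j for j in range(n)
                if combined_distance(papers[i], papers[j], alpha) <= eps]
    cluster = 0
    for i in range(n):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1          # noise / isolated paper
            continue
        labels[i] = cluster
        seeds = [j for j in nbrs if j != i]
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point joins the cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:   # core point: keep expanding
                seeds.extend(k for k in jn if labels[k] is None)
        cluster += 1
    return labels
```

Each resulting cluster is interpreted as one real-world author; papers labeled -1 are the isolated ones the method later merges into the nearest cluster.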
Network traffic anomaly detection based on feature selection has attracted great research interest. Most existing schemes detect anomalies by reducing the dimensionality of traffic data but ignore the correlations between data features, which results in inefficient detection of anomalous traffic. In order to effectively identify various types of attacks, a model based on a self-attention mechanism is proposed to learn the correlations between multiple features of network traffic data. A novel multi-feature anomalous traffic detection and classification model is then designed, which analyzes these correlations and identifies anomalous network traffic. Experimental results show that, compared with two benchmark methods, the proposed technique increases the accuracy of anomaly detection and classification by up to 1.65% and reduces the false alarm rate by 1.1%.
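The correlation-learning step can be illustrated with a bare scaled dot-product self-attention over a flow's feature vectors: every feature is re-expressed as a weighted mix of all features, with weights given by pairwise dot products. Identity Q/K/V projections are used for brevity; the paper's model presumably learns these projections, which are omitted here.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(X):
    """Scaled dot-product self-attention with identity projections:
    output[i] = sum_j softmax(X[i].X[j] / sqrt(d)) * X[j]."""
    d = len(X[0])
    scores = [[sum(a * b for a, b in zip(q, k)) / math.sqrt(d) for k in X]
              for q in X]
    weights = [softmax(row) for row in scores]
    return [[sum(w * X[j][t] for j, w in enumerate(row)) for t in range(d)]
            for row in weights]
```

Because each output row is a convex combination of the input rows, strongly correlated features reinforce each other in the mixed representation, which is the signal the downstream classifier uses to separate attack types.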
In this study, we analyzed the evolution of saltwater intrusion in Changjiang Estuary since the 1970s based on: salinity data collected at the Wusong, Gaoqiao, and Baogang stations; days of saltwater intrusion at the water intakes of the Wusong water plant, Chenhang reservoir, and Qingcaosha reservoir; river discharges at Datong station; and satellite remote sensing data of estuarine topography changes. The measured salinity changes at Wusong, Gaoqiao, and Baogang stations in the dry seasons showed that the saltwater intrusion in the Changjiang Estuary was serious in the 1970s, became weak in the 1980s, and was weak from 1990 to 1996. The peak salinity at Baogang station occurred prior to Wusong station, and the peak salinity at Wusong station occurred prior to Gaoqiao station; these observations indicate that the saltwater intrusion originated from upstream saltwater spilling over from the North Branch. The annual days of saltwater intrusion at the water intakes of the Wusong water plant, Chenhang reservoir, and Qingcaosha reservoir indicate that the saltwater intrusion was serious from 1974 to 1981 and particularly acute in 1974, 1979 and 1980; in these cases, the days of saltwater intrusion at the water intake of Wusong water plant exceeded 70 days. The saltwater intrusion was relatively weak from 1982 to 1995. The saltwater intrusion intensified from 1996 to 2002, and serious saltwater intrusion occurred in 1996, 1999, and 2001. The saltwater intrusion from 2003 to 2020 decreased significantly. The construction of the Three Gorges reservoir in 2003 and the cascade reservoirs in the upper reaches of the Changjiang Basin after 2003 resulted in a significant increase in river discharge during the dry season; this phenomenon was the main driver for the weakening saltwater intrusion. The changes in estuarine topography from 1974 to 2013 were detected by satellite remote sensing images; in particular, the North Branch was a wide river in the 1970s. 
With the successive reclamations of Yonglongsha, Xinglongsha, and Xincunsha, as well as the reclamation of the south shoal in the lower reaches of the North Branch, the North Branch became narrow and its tidal capacity decreased; over a long time scale, these changes gradually weakened the saltwater spillover from the North Branch into the South Branch. The topography changes of the North Branch also explain why the saltwater intrusion was serious in the 1970s and has weakened over time, particularly since the beginning of this century. River discharge and estuarine topography changes are the main drivers of the long-term changes in saltwater intrusion in the Changjiang Estuary. With the construction of more reservoirs in the upper reaches of the Changjiang River and further shrinkage of the North Branch, saltwater intrusion will continue to weaken. These changes are conducive to the safety of freshwater resources in the Changjiang Estuary.