  • SteganoDDPM: A high-quality image steganography self-learning method using diffusion model

    Subjects: Computer Science >> Information Security; Computer Science >> Computer Application Technology · Submitted: 2024-04-23

    Abstract: Image steganography has become a focal point of interest for researchers due to its capacity for the covert transmission of sensitive data. Traditional diffusion models often struggle with image steganography tasks involving paired data, as their core principle of gradually removing noise is not directly suited for maintaining the correspondence between carrier and secret information. To address this challenge, this paper conducts an in-depth analysis of the principles behind diffusion models and proposes a novel framework for an image steganography diffusion model. The study begins by mathematically representing the steganography tasks of paired images, introducing two optimization objectives: minimizing the secrecy leakage function and embedding distortion function. Subsequently, it identifies three key issues that need to be addressed in paired image steganography tasks and, through specific constraint mechanisms and optimization strategies, enables the diffusion model to effectively handle paired data. This enhances the quality of the generated stego-images and resolves issues such as image clarity. Finally, on public datasets like CelebA, the proposed model is compared with existing generation model-based image steganography techniques, analyzing its implementation effects and performance parameters. Experimental results indicate that, compared to current technologies, the model framework proposed in this study not only improves image quality but also achieves significant enhancements in multiple performance metrics, including the imperceptibility and anti-detection capabilities of the images. Specifically, the PSNR of its stego-images reaches 93.14 dB, and the extracted images’ PSNR reaches 91.23 dB, an approximate improvement of 30% over existing technologies; the attack success rate is reduced to 2.4×10⁻³⁸. These experimental outcomes validate the efficacy and superiority of the method in image steganography tasks.
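
    PSNR is the headline image-quality metric quoted above. For orientation only, the sketch below shows a standard NumPy computation of PSNR between a cover image and its stego counterpart; it is not the paper's evaluation code, and the image loader is a hypothetical placeholder.

    ```python
    import numpy as np

    def psnr(reference: np.ndarray, distorted: np.ndarray, max_val: float = 255.0) -> float:
        """Peak signal-to-noise ratio in dB between two equally sized images."""
        mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")  # identical images
        return 10.0 * np.log10((max_val ** 2) / mse)

    # Example: compare a cover image with its stego counterpart (both uint8 arrays).
    # cover, stego = load_images(...)   # hypothetical loader
    # print(f"PSNR: {psnr(cover, stego):.2f} dB")
    ```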

  • Integrative Complexity Modeling in English and Chinese Texts based on large language model

    Subjects: Psychology >> Applied Psychology; Computer Science >> Computer Application Technology · Submitted: 2024-04-10

    Abstract: Integrative complexity is a concept used in psychology to measure the structure of an individual’s thinking in two aspects: differentiation and integration. The measurement of integrative complexity relies primarily on manual analysis of textual content, which can be written materials, speeches, interview transcripts, or any other form of oral or written expression. To address the high cost of manual assessment, the low accuracy of existing automated assessment methods, and the lack of an assessment scheme for Chinese text, this study designed an automated assessment scheme for integrative complexity in Chinese and English texts. We utilized large-language-model text data augmentation and model transfer techniques for the assessment of integrative complexity, and explored automated assessment methods for its two sub-structures, namely fine integrative complexity and dialectical integrative complexity. Two studies were designed and implemented. First, a prediction model for the integrative complexity of English text was implemented based on text data augmentation with a large language model; second, a prediction model for the integrative complexity of Chinese text was implemented based on model transfer. The results showed that: 1) We used GPT-3.5-Turbo for English text data augmentation, a pre-trained multilingual RoBERTa model for word vector extraction, and a text convolutional neural network as the downstream model. The Spearman correlation coefficient between this model’s predictions of integrative complexity and the manual scoring results was 0.62, with 0.51 for dialectical integrative complexity and 0.60 for fine integrative complexity, outperforming machine learning methods and neural network models without data augmentation. 2) In Study 2, a model with the same structure as the neural network in Study 1 was established, and the final parameters from Study 1 were transferred to it to train an integrative complexity prediction model for Chinese text. In the zero-sample case, the transfer learning model’s Spearman correlation coefficient for integrative complexity was 0.31, with 0.31 for dialectical integrative complexity and 0.33 for fine integrative complexity, all better than the randomly initialized model (integrative complexity: 0.17, dialectical integrative complexity: 0.10, fine integrative complexity: 0.10). In the small-sample case, the transfer learning model’s Spearman correlation coefficient was 0.73, with 0.51 for dialectical integrative complexity and 0.73 for fine integrative complexity.
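
    As a rough illustration of the parameter-transfer step described in Study 2, the sketch below defines a text-CNN of the kind mentioned above and initializes a Chinese-text model from an English-text checkpoint; the architecture details and file name are assumptions, not taken from the paper.

    ```python
    import torch
    import torch.nn as nn

    class TextCNN(nn.Module):
        """Hypothetical text-CNN head operating on pre-extracted RoBERTa word vectors."""
        def __init__(self, embed_dim: int = 768, num_filters: int = 128):
            super().__init__()
            self.convs = nn.ModuleList(
                [nn.Conv1d(embed_dim, num_filters, kernel_size=k) for k in (3, 4, 5)]
            )
            self.head = nn.Linear(num_filters * 3, 1)  # integrative-complexity score

        def forward(self, x):                      # x: (batch, seq_len, embed_dim)
            x = x.transpose(1, 2)                  # -> (batch, embed_dim, seq_len)
            pooled = [torch.max(conv(x), dim=2).values for conv in self.convs]
            return self.head(torch.cat(pooled, dim=1))

    model = TextCNN()
    # Transfer: initialize the Chinese-text model from the English-text checkpoint
    # (the checkpoint path is hypothetical; identical architectures mean the keys match).
    # state = torch.load("english_ic_model.pt", map_location="cpu")
    # model.load_state_dict(state)
    ```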

  • Exploration of the Integration and Application of Large Model and Standard Literature Knowledge Base

    Subjects: Computer Science >> Computer Application Technology · Submitted: 2024-04-10

    Abstract: In the context of artificial intelligence and big data technology, the use of large models and the construction of standard literature knowledge bases are of great value for scientific research innovation, knowledge mining, and information retrieval. A standard literature knowledge base provides solid support for the standardization and normalization of various industries. This study first examines the current status of standard literature, then builds a framework for integrating large models with a standard literature knowledge base based on retrieval augmentation, and proposes enhancement and optimization strategies for each stage. Finally, it looks ahead to future research directions and application prospects.
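
    The retrieval-augmentation framework is not specified in code in the abstract; purely as a hedged sketch, a minimal retrieve-then-prompt loop over a standards knowledge base might look like the following, where the embedding function, example documents, and prompt format are all invented placeholders.

    ```python
    import numpy as np

    def embed(text: str) -> np.ndarray:
        """Placeholder embedding; a real system would call a trained sentence encoder."""
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        v = rng.standard_normal(384)
        return v / np.linalg.norm(v)

    documents = [
        "GB/T 1.1 sets out the structure and drafting rules for standards.",
        "ISO/IEC Directives Part 2 covers principles for drafting standards documents.",
    ]
    doc_vectors = np.stack([embed(d) for d in documents])

    def retrieve(query: str, k: int = 1) -> list[str]:
        scores = doc_vectors @ embed(query)            # cosine similarity on unit vectors
        return [documents[i] for i in np.argsort(scores)[::-1][:k]]

    query = "What are the drafting rules for a standard?"
    context = "\n".join(retrieve(query))
    prompt = f"Answer using the standards excerpts below.\n{context}\n\nQuestion: {query}"
    # The assembled prompt would then be sent to the large model for grounded generation.
    ```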

  • The Revision and Validation of the Simplified Chinese Linguistic Inquiry and Word Count Dictionary 2024 (SCLIWC2024)

    Subjects: Psychology >> Applied Psychology; Computer Science >> Computer Application Technology · Submitted: 2024-04-09

    Abstract: In recent years, the Linguistic Inquiry and Word Count (LIWC) tool has garnered increasing attention, offering the promise of objective, automated, and transparent psychological text analysis. This resurgence has reignited enthusiasm among psychologists for language analysis research. The recent revision of the LIWC-22 dictionary has introduced numerous variables aimed at assessing various socio-psychological structures, thus expanding the application potential of the LIWC tool. To further promote the cultural adaptation of the LIWC tool, we have revised and validated the Simplified Chinese Linguistic Inquiry and Word Count Dictionary 2024 (SCLIWC2024) to better align with the features of LIWC-22. In Study One, building upon the SCLIWC dictionary, we revised SCLIWC2024 by comparing it with the LIWC-22 and CLIWC2015 dictionaries. In Study Two, we conducted two experiments to validate the efficacy of SCLIWC2024 in detecting different psychological semantics in online texts, addressing crucial questions regarding how to more effectively utilize SCLIWC2024 for detecting the psychological semantics of short texts on social networking platforms.
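
    At its core, LIWC-style analysis counts how many words of a text fall into dictionary-defined categories. The toy sketch below illustrates only that counting step; the category words are invented examples, not entries from SCLIWC2024.

    ```python
    from collections import Counter

    # Invented example categories; the real SCLIWC2024 dictionary is far larger.
    dictionary = {
        "positive_emotion": {"happy", "glad", "hopeful"},
        "negative_emotion": {"sad", "angry", "worried"},
    }

    def liwc_counts(tokens: list[str]) -> dict[str, float]:
        """Return the percentage of tokens that fall into each dictionary category."""
        counts = Counter()
        for token in tokens:
            for category, words in dictionary.items():
                if token in words:
                    counts[category] += 1
        total = len(tokens) or 1
        return {category: 100.0 * n / total for category, n in counts.items()}

    print(liwc_counts("I am happy but a little worried".lower().split()))
    ```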

  • Multimodal Physical Fitness Monitoring (PFM) Framework Based on TimeMAE-PFM in Wearable Scenarios

    Subjects: Computer Science >> Computer Application Technology · Submitted: 2024-04-07

    Abstract: Physical function monitoring (PFM) plays a crucial role in healthcare, especially for the elderly. Traditional assessment methods such as the Short Physical Performance Battery (SPPB) fail to capture the full dynamic characteristics of physical function. Wearable sensors such as smart wristbands offer a promising solution to this issue. However, challenges remain, such as the computational complexity of machine learning methods and inadequate information capture. This paper proposes a multi-modal PFM framework based on an improved TimeMAE, which compresses time-series data into a low-dimensional latent space and integrates a self-enhanced attention module. The framework achieves effective monitoring of physical health, providing a solution for real-time and personalized assessment. The method is validated on the NHATS dataset, and the results demonstrate an accuracy of 70.6% and an AUC of 82.20%, surpassing other state-of-the-art time-series classification models.
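
    For reference, the two reported evaluation metrics, accuracy and AUC, can be computed from a classifier's outputs as in the short sketch below; the labels and scores shown are invented placeholders, not NHATS results.

    ```python
    from sklearn.metrics import accuracy_score, roc_auc_score

    # Placeholder ground-truth labels and model outputs for a binary PFM classification task.
    y_true = [0, 1, 1, 0, 1, 0, 1, 1]
    y_score = [0.2, 0.8, 0.6, 0.4, 0.9, 0.3, 0.55, 0.7]   # predicted probabilities
    y_pred = [int(s >= 0.5) for s in y_score]              # thresholded class predictions

    print(f"Accuracy: {accuracy_score(y_true, y_pred):.3f}")
    print(f"AUC:      {roc_auc_score(y_true, y_score):.3f}")
    ```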

  • Terrain Point Cloud Inpainting via Signal Decomposition

    Subjects: Computer Science >> Computer Application Technology · Submitted: 2024-04-05

    Abstract: The rapid development of 3D acquisition technology has made it possible to obtain point clouds of real-world terrains. However, due to limitations in sensor acquisition technology or specific requirements, point clouds often contain defects such as holes with missing data. Inpainting algorithms are widely used to patch these holes. However, existing traditional inpainting algorithms rely on precise hole boundaries, which limits their ability to handle cases where the boundaries are not well-defined. On the other hand, learning-based completion methods often prioritize reconstructing the entire point cloud instead of solely focusing on hole filling. Based on the fact that real-world terrain exhibits both global smoothness and rich local detail, we propose a novel representation for terrain point clouds. This representation can help to repair the holes without clear boundaries. Specifically, it decomposes terrains into low-frequency and high-frequency components, which are represented by B-spline surfaces and relative height maps respectively. In this way, the terrain point cloud inpainting problem is transformed into a B-spline surface fitting and 2D image inpainting problem. By solving the two problems, the highly complex and irregular holes on the terrain point clouds can be well-filled, which not only satisfies the global terrain undulation but also exhibits rich geometric details. The experimental results also demonstrate the effectiveness of our method.
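
    The low-/high-frequency split described above can be illustrated with a simple sketch: fit a smooth bivariate spline to the terrain heights and treat the residuals as the relative height map. The synthetic data and smoothing factor below are assumptions; the paper's actual fitting procedure may differ.

    ```python
    import numpy as np
    from scipy.interpolate import SmoothBivariateSpline

    # Synthetic terrain sample: smooth undulation plus fine-scale detail.
    rng = np.random.default_rng(0)
    x, y = rng.uniform(0, 10, 2000), rng.uniform(0, 10, 2000)
    z = np.sin(x) * np.cos(y) + 0.05 * rng.standard_normal(x.size)

    # Low-frequency component: a smooth B-spline surface fitted to the points.
    spline = SmoothBivariateSpline(x, y, z, s=len(z) * 0.05)
    z_low = spline.ev(x, y)

    # High-frequency component: per-point residuals, i.e. a relative height map.
    z_high = z - z_low

    # Inpainting would then fill holes in the spline surface and in the 2D residual map separately.
    ```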

  • Implementation of Text Analysis and Processing for Japanese Articles Based on MeCab Library in Python

    Subjects: Computer Science >> Computer Application Technology · Submitted: 2024-04-04

    Abstract: Text analysis and processing have become increasingly important topics, and there are many examples of Chinese word segmentation using jieba. However, there is little research on Japanese word segmentation. This article introduces the MeCab library’s Japanese word segmentation functionality in Python and provides relevant example code for implementing Japanese word segmentation as needed.
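
    A minimal usage sketch of the mecab-python3 binding is shown below for orientation; it assumes a Japanese dictionary (e.g. ipadic or unidic-lite) is installed and is not the article's own case code.

    ```python
    import MeCab

    # "-Owakati" outputs the sentence as space-separated surface forms (word segmentation only).
    wakati = MeCab.Tagger("-Owakati")
    print(wakati.parse("すもももももももものうち").strip())
    # -> すもも も もも も もも の うち

    # The default output format adds part-of-speech and reading information per token.
    tagger = MeCab.Tagger()
    print(tagger.parse("今日はいい天気です"))
    ```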

  • The Impact of Zhong-yong Thinking Style on Mental Health using LLM: The Mediating Role of Moral Centrality

    Subjects: Psychology >> Applied Psychology; Computer Science >> Computer Application Technology · Submitted: 2024-03-23

    Abstract: In recent years, researchers have recognized the impact of Zhong-yong thinking style on mental health. However, it is not clear how Zhong-yong thinking style affects mental health through internal psychological mechanisms. Previous studies found that individuals with a better ability to coordinate agency (a motivation representing self-interest) and communion (a motivation representing altruism) tend to have a higher level of moral centrality. Moral centrality reflects the balance of the internal motivation system, which can reduce the conflict between agency and communion, helping individuals reach a state in which the opposing motivations support and energize each other. Moral centrality may therefore play a potential mediating role in the impact of Zhong-yong thinking style on mental health. Although there are relatively mature methods for measuring individual moral centrality, they involve the complex task of coding values in personal strivings, making the measurement of moral centrality particularly complicated and labor-intensive. However, large language models (LLMs) such as ChatGPT have demonstrated excellent contextual comprehension and offer new possibilities for text analysis and coding work. Accordingly, this study applies large language models to the coding work of psychological research, reducing the time and labor cost required to measure individual moral centrality, and explores how Zhong-yong thinking style affects individual mental health through moral centrality. Study 1 involves training GPT-3.5 Turbo to recognize the values contained in personal strivings (achievement / power / universalism / benevolence) using differentiated prompts and evaluating its accuracy, precision, and recall, in order to obtain a model that meets the requirements for application. Study 2 applies the above GPT-3.5 Turbo model in the process of measuring moral centrality, exploring how moral centrality mediates the impact of Zhong-yong thinking style on depression and anxiety. The findings are as follows: (1) GPT-3.5 Turbo demonstrated an accuracy rate of not less than 0.80 in recognizing the values of power, achievement, universalism, and benevolence, showing the application potential of ChatGPT in psychological research; (2) Moral centrality played a mediating role in the impact of Zhong-yong thinking style on depression/anxiety. Specifically, individuals with a higher level of Zhong-yong thinking style could better integrate agency and communion, enhancing their moral centrality and thereby reducing levels of depression/anxiety. In summary, this study utilized large language models to break through the technical limitations of traditional psychological research, exploring the mechanisms through which Zhong-yong thinking style affects mental health and verifying the mediating role of moral centrality. On the one hand, it demonstrates the application potential of large language models in psychological research. On the other hand, it deepens our understanding of the mechanisms through which Zhong-yong thinking style influences mental health, enriching the theoretical foundation of this field. It suggests that policymakers could draw on the advantages of Zhong-yong thinking culture, advocating for values that emphasize individual development while also focusing on collective well-being, helping people improve moral centrality and thereby mitigating the negative impact of economic inequality on mental health.
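
    Study 1's value-coding step amounts to prompting GPT-3.5 Turbo to label each personal striving. A hedged sketch of such a call with the openai Python client follows; the prompt wording and label set are illustrative only, not the study's actual prompts.

    ```python
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    VALUES = ["achievement", "power", "universalism", "benevolence", "none"]

    def code_striving(striving: str) -> str:
        """Ask the model which value, if any, a personal striving expresses."""
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system",
                 "content": f"Classify the striving into one of: {', '.join(VALUES)}. "
                            "Reply with the single label only."},
                {"role": "user", "content": striving},
            ],
            temperature=0,
        )
        return response.choices[0].message.content.strip().lower()

    print(code_striving("I want to help my classmates succeed in their studies"))
    ```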

  • Research on the Mechanism of the Impact of Income Distribution Inequality on Mental Health: The Mediating Role of Moral Centrality

    Subjects: Psychology >> Applied Psychology; Computer Science >> Computer Application Technology · Submitted: 2024-03-23

    Abstract: In recent years, researchers have increasingly recognized the impact of unequal income distribution on individual mental health. However, it is not clear how it affects mental health through internal psychological mechanisms. As the macro environment in which individuals live, the economy shapes people’s values and gives individuals different levels of motivational orientation. Previous studies have indicated that individuals with a better ability to coordinate agency and communion tend to have a relatively high level of moral centrality. Moral centrality reflects the balance of the internal motivation system, which can reduce the conflict between agency and communion, helping individuals reach a state in which the opposing motivations support and energize each other. Thus, individuals are not only able to efficiently realize their personal values but can also more easily attain eudaimonic well-being, thereby reducing the risk of mental health problems. Therefore, moral centrality may play a potential mediating role in the impact of income distribution inequality on mental health. Overall, with income distribution inequality as the independent variable, this study explores the mechanisms through which it affects mental health by examining how income distribution influences individual moral centrality and, in turn, mental health. Our research not only enriches the theoretical foundation of the mental health field, but also provides a theoretical basis for interventions and helps to formulate targeted strategies to improve the psychological well-being of the public. With the help of social media big data and natural language processing technology, we use posts from regional microblogs to extract, via a psychosemantic lexicon, word-frequency features representing a group’s moral centrality and mental health level, and use panel data analysis to examine how inequality in income distribution affects the negative emotions and suicide risk of regional groups through moral centrality. The results confirm that moral centrality plays a mediating role in the effect of regional income distribution inequality on group negative emotions/suicide risk, and that regions with higher income distribution inequality tend to be accompanied by lower levels of group moral centrality, which in turn leads to an increase in negative emotions/suicide risk among groups in the region.

  • Application of Deep Learning Methods Combined with Physical Background in Wide Field of View Imaging Atmospheric Cherenkov Telescopes

    Subjects: Astronomy >> Astronomical Instruments and Techniques; Physics >> Nuclear Physics; Computer Science >> Computer Application Technology · Submitted: 2024-03-10

    Abstract: The HADAR experiment, which will be constructed in Tibet, China, combines the wide-angle advantages of traditional EAS array detectors with the high-sensitivity advantages of focused Cherenkov detectors. Its physics objective is to observe transient sources such as gamma-ray bursts and the counterparts of gravitational waves. The aim of this study is to utilize the latest AI technology to enhance the sensitivity of the HADAR experiment. We have creatively built training datasets and models by incorporating the relevant physical theories for the various applications. After careful design, the models are able to determine the type, energy, and direction of incident particles. On a test dataset at 10 TeV, we obtained a background identification accuracy of 98.6%, a relative energy reconstruction error of 10.0%, and an angular resolution of 0.22 degrees. These findings demonstrate the enormous potential for enhancing the precision and dependability of detector data analysis in astrophysical research. Thanks to deep learning techniques, the HADAR experiment’s observational sensitivity to the Crab Nebula has surpassed that of MAGIC and H.E.S.S. at energies below 0.5 TeV and remains competitive with conventional narrow-field Cherenkov telescopes at higher energies. Additionally, our experiment offers a fresh approach to dealing with strongly connected scattered data.
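
    As a loose illustration of a network that jointly outputs particle type, energy, and direction, the sketch below shows a hypothetical multi-head output layer in PyTorch; the feature dimension and head definitions are assumptions, not the HADAR model.

    ```python
    import torch
    import torch.nn as nn

    class MultiTaskHead(nn.Module):
        """Hypothetical shared-backbone heads for particle type, energy, and direction."""
        def __init__(self, feature_dim: int = 256):
            super().__init__()
            self.classifier = nn.Linear(feature_dim, 2)    # gamma vs. cosmic-ray background
            self.energy = nn.Linear(feature_dim, 1)        # log-energy regression
            self.direction = nn.Linear(feature_dim, 2)     # zenith and azimuth offsets

        def forward(self, features):
            return self.classifier(features), self.energy(features), self.direction(features)

    features = torch.randn(8, 256)                         # placeholder backbone output
    logits, log_energy, direction = MultiTaskHead()(features)
    ```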

  • Optimization of a prediction model of life satisfaction based on text data augmentation

    Subjects: Psychology >> Applied Psychology; Computer Science >> Computer Application Technology · Submitted: 2024-02-29

    Abstract: Objective With the development of network big data and machine learning, more and more studies have started to combine text analysis with machine learning algorithms to predict individual life satisfaction. In studies focused on building life satisfaction prediction models, it is often difficult to obtain large amounts of valid, labeled data. This study aims to solve this problem using data augmentation and to optimize the prediction model of life satisfaction. Method 357 life status descriptions annotated with self-rated life satisfaction scale scores were used as the original text data. After preprocessing with DLUT-Emotionontology, the EDA and back-translation methods were applied, and the prediction models were built using traditional machine learning algorithms. Results Results showed that (1) prediction accuracy was largely enhanced after using the adapted version of DLUT-Emotionontology; (2) only the linear regression model was enhanced after data augmentation; (3) the ridge regression model showed the greatest prediction accuracy when trained on the original data (r = 0.4131). Conclusion Improving feature extraction accuracy can optimize the current life satisfaction prediction model, but text data augmentation methods such as back-translation and EDA may not be applicable to a life satisfaction prediction model based on word frequency.
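
    For orientation, ridge regression on word-frequency features can be set up with scikit-learn as in the sketch below; the feature matrix and targets are random placeholders, not the study's data.

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score

    # Placeholder design matrix: word-frequency features per life-status description,
    # with self-rated life satisfaction scores as targets (both invented here).
    X = np.random.rand(357, 50)
    y = np.random.rand(357) * 4 + 1          # e.g. scale scores in [1, 5]

    model = Ridge(alpha=1.0)                  # L2-regularised linear regression
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"Mean cross-validated R^2: {scores.mean():.3f}")
    ```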

  • Research on the influence of low-light conditions on deep learning object detection

    Subjects: Computer Science >> Computer Application Technology · Submitted: 2024-01-09

    Abstract: Object detection under low-illumination conditions is an important task in image processing. Current research focuses on reducing image noise through image enhancement and on adapting network structures and datasets to object detection under low-illumination conditions. However, few studies have examined the specific influence of low-illumination conditions on object detection. Therefore, in this paper we algorithmically generate datasets that simulate low-illumination conditions, conduct object detection under different noise conditions, collect the results, and study the impact on object detection.
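
    One common way to simulate low-light degradation, shown here purely as an illustrative sketch rather than the paper's specific algorithm, is to darken an image with a gamma curve and add Gaussian sensor noise.

    ```python
    import numpy as np

    def simulate_low_light(image: np.ndarray, gamma: float = 3.0, noise_std: float = 10.0) -> np.ndarray:
        """Darken an 8-bit RGB image with a gamma curve and add sensor-like Gaussian noise."""
        img = image.astype(np.float64) / 255.0
        dark = np.power(img, gamma)                       # gamma > 1 suppresses brightness
        noisy = dark * 255.0 + np.random.normal(0.0, noise_std, image.shape)
        return np.clip(noisy, 0, 255).astype(np.uint8)

    # Example: degrade a normally lit image, then feed the result to a detector.
    # low_light_img = simulate_low_light(original_img, gamma=4.0, noise_std=15.0)
    ```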

  • Confident Association for Long-term Tracking

    Subjects: Computer Science >> Computer Application Technology · Submitted: 2024-01-07

    Abstract: Aiming at the exponential growth of solution scale in multiple hypothesis tracking (MHT), a continuous consistency model (CCM) is proposed. The key to improving MHT performance is to improve the efficiency of branch management. However, due to inevitable detector failures, when the tree is expanded and each detection is organized as the root node of a new tree, a large number of virtual nodes are used. This leads to rapid growth of branches. Different from previous MHT implementations, CCM divides detections into four categories: continuous, left-continuous, right-continuous, and discontinuous. Comparative experiments show that CCM significantly improves computational efficiency and obtains state-of-the-art results on the MOT Challenge benchmark.

  • A review of feature-level fusion algorithms

    Subjects: Computer Science >> Computer Application Technology · Submitted: 2024-01-07

    Abstract: This paper summarizes the classification of feature-level data fusion algorithms, covering fusion algorithms based on probability and statistics, logical reasoning, feature extraction, search, and neural networks, and outlines future research directions for data fusion.

  • Label Recognition and Detection based on YOLO Algorithm

    Subjects: Computer Science >> Computer Application Technology · Submitted: 2024-01-07

    Abstract: The present study utilizes the YOLO visual algorithm for the recognition and detection of labels, and discusses the experimental process and results.
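
    As a hedged example of running YOLO-based detection in Python, the sketch below uses the ultralytics package with a generic pretrained checkpoint; the weights, image path, and classes are placeholders, since the study does not specify them.

    ```python
    from ultralytics import YOLO

    # Load a pretrained detector; a generic YOLOv8 checkpoint stands in here.
    model = YOLO("yolov8n.pt")

    # Run detection on a label image (placeholder path) and print each predicted box.
    results = model("label_sample.jpg")
    for box in results[0].boxes:
        cls_id = int(box.cls)
        print(model.names[cls_id], float(box.conf), box.xyxy.tolist())
    ```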

  • Application and Improvement Strategies of the SGT Model in Magnetic Signal Anomaly Detection

    Subjects: Computer Science >> Computer Application Technology · Submitted: 2024-01-06

    Abstract: This report explores the application of the SGT model in the field of magnetic prospecting, with a special focus on its performance on the MGT, SNR0, and SNR5 datasets. The experimental results reveal that the SGT model suffers from a high false alarm rate and large prediction bias when dealing with these datasets. To address the model's insufficient predictive and generalization abilities, we designed a series of improvement experiments focusing on three aspects: parameter tuning, optimizing the feature extraction method, and modifying the continuity judgment.
    Among these three improvement methods, parameter tuning achieved about a 0.5% performance improvement, while the feature extraction optimization and orthogonal basis judgment methods instead reduced prediction performance by 20%. Through code review and logical reasoning, we found that the problem stems from the incompatibility of the feature extraction with the model. In order to adapt to the orthogonal basis algorithm, we propose an improvement: introduce many different types of features, including time-domain, frequency-domain, and statistical features, and comprehensively utilize the information from these features to construct a more complex and comprehensive SGT model. In addition, a stacking module is introduced that takes the prediction results of single models based on different features as inputs and generates a more accurate final prediction through further learning and synthesis.
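
    The proposed stacking idea can be sketched with scikit-learn's StackingClassifier; the base learners and feature views below are invented stand-ins, not the report's actual configuration.

    ```python
    from sklearn.ensemble import StackingClassifier, RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC

    # Hypothetical base learners, each imagined as operating on a different feature view
    # (time-domain, frequency-domain, statistical); here they would all see the same X for simplicity.
    estimators = [
        ("time_features", RandomForestClassifier(n_estimators=100)),
        ("freq_features", SVC(probability=True)),
    ]
    stack = StackingClassifier(estimators=estimators, final_estimator=LogisticRegression())

    # stack.fit(X_train, y_train); stack.predict(X_test)   # placeholder training data
    ```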
     

  • A Spatial Scene Classification Framework Based on Object Detection

    Subjects: Computer Science >> Computer Application Technology · Submitted: 2024-01-06

    Abstract: Spatial scene classification has long been a prominent area of research in the field of geographic information science. In the past, traditional approaches relied heavily on retrieval methods based on image features. However, given the rapid advancements in deep learning and artificial intelligence, the efficient classification of complex spatial scenes has become increasingly crucial. This paper presents a novel framework that combines object detection with a knowledge graph to automate spatial scene classification. Initially, the input images are processed with object detection techniques to identify key entities within the scenes. Subsequently, a knowledge graph, which encompasses various spatial scenes, entities, and their relationships, is used to identify spatial scene categories. To validate the effectiveness of the framework, experiments were conducted using eight spatial scene categories as an example. The results demonstrated a high level of consistency with the actual spatial types, affirming the efficacy of the framework and highlighting its potential application value in the domain of spatial scene classification.
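
    The mapping from detected entities to a scene category can be illustrated with a toy lookup; the categories, entities, and matching rule below are invented and far simpler than a real spatial-scene knowledge graph.

    ```python
    # Toy illustration of the detection-to-scene mapping.
    scene_graph = {
        "harbor": {"ship", "crane", "dock"},
        "airport": {"airplane", "runway", "terminal"},
        "residential": {"house", "car", "tree"},
    }

    def classify_scene(detected_entities: set[str]) -> str:
        """Pick the scene category whose entity set overlaps most with the detections."""
        best = max(scene_graph, key=lambda scene: len(scene_graph[scene] & detected_entities))
        return best if scene_graph[best] & detected_entities else "unknown"

    print(classify_scene({"ship", "crane", "water"}))   # -> harbor
    ```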

  • Exploring diffusion models: a comprehensive review from theory to application

    Subjects: Computer Science >> Computer Application Technology · Submitted: 2024-01-06

    Abstract: Diffusion models are a powerful type of generative model capable of producing high-quality results in various fields including images, text, and audio. This review aims to summarize and analyze the latest research progress in diffusion models applied in the vision domain, including both theoretical and practical contributions in the field. Initially, the article discusses the characteristics and principles of three mainstream models: denoising diffusion probabilistic models, score-based diffusion generative models, and diffusion generative models based on stochastic differential equations. It also analyzes derivatives aimed at optimizing internal algorithms and improving sampling efficiency. Furthermore, the review provides a comprehensive summary of current applications of diffusion models, including computer vision, natural language processing, time series analysis, multimodal research, and interdisciplinary fields. Finally, based on current trends and challenges, it offers a forecast for the future direction of diffusion models, aiming to guide and inspire research in the field. This article is intended to provide researchers with a comprehensive overview of diffusion model research and application, emphasizing its significant role and potential in the field of Artificial Intelligence Generated Content (AIGC).
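
    Since the review covers denoising diffusion probabilistic models, a minimal sketch of the standard DDPM forward-noising step and its training target is included here for orientation; the linear noise schedule values are common defaults, not tied to any surveyed paper.

    ```python
    import torch

    # Standard DDPM forward (noising) process: x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps.
    T = 1000
    betas = torch.linspace(1e-4, 0.02, T)            # common linear noise schedule
    alphas_bar = torch.cumprod(1.0 - betas, dim=0)

    def noise_sample(x0: torch.Tensor, t: int) -> tuple[torch.Tensor, torch.Tensor]:
        """Return a noised sample x_t and the noise added to produce it."""
        eps = torch.randn_like(x0)
        xt = alphas_bar[t].sqrt() * x0 + (1.0 - alphas_bar[t]).sqrt() * eps
        return xt, eps

    # Training minimizes || eps - eps_theta(x_t, t) ||^2, i.e. the network predicts the added noise.
    ```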
     

  • An Intelligent Detection Method for Pituitary Microadenoma Based on Dynamic Enhanced Magnetic Resonance Images

    Subjects: Computer Science >> Computer Application Technology; Medicine, Pharmacy >> Clinical Medicine · Submitted: 2024-01-06

    Abstract: Pituitary microadenomas are usually difficult to detect on non-contrast MRI; the risk of misdiagnosis is high and the number of cases is small, which makes the detection, segmentation, and classification of pituitary microadenomas difficult. To address these problems, a computer-aided diagnostic system, DCEPM-CAD, based on dynamic enhanced sequences is proposed. While extracting the temporal information of the dynamic enhanced MR sequences, an attention module was added to the HRNetv2 backbone network to improve its performance. To avoid the problem that pituitary microadenomas occupy too few pixels in the image for their relevant features to be extracted, this paper also introduces the TecoGAN image super-resolution method to super-resolve images of the pituitary region. On a dataset of 862 MR images from 275 eligible participants, the diagnostic accuracy of DCEPM-CAD for pituitary microadenomas reached 77%. At the same time, significant results were achieved in the segmentation of the pituitary and pituitary microadenomas, with Dice similarity coefficients of 92.16 and 72.54, respectively.
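
    The Dice similarity coefficient reported above is a standard segmentation overlap measure; a minimal NumPy sketch of its computation follows, with toy masks standing in for predicted and ground-truth pituitary regions.

    ```python
    import numpy as np

    def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
        """Dice similarity between two binary segmentation masks."""
        pred, target = pred.astype(bool), target.astype(bool)
        intersection = np.logical_and(pred, target).sum()
        return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

    # Example with toy masks; real use would compare predicted and ground-truth masks.
    pred = np.array([[0, 1, 1], [0, 1, 0]])
    target = np.array([[0, 1, 0], [0, 1, 0]])
    print(f"Dice: {dice_coefficient(pred, target):.4f}")
    ```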

  • Overview of deep learning theory and its application

    Subjects: Computer Science >> Computer Application Technology · Submitted: 2024-01-06

    Abstract: Deep learning is a new research direction in the field of machine learning, introduced to bring machine learning closer to its original goal: artificial intelligence (AI).
    Deep learning learns the inherent laws and representation levels of sample data, and the information obtained in this learning process is very helpful for interpreting data such as text, images, and sounds. Its ultimate goal is to enable machines to analyze and learn like humans, recognizing data such as text, images, and sounds. Deep learning is a complex family of machine learning algorithms that has achieved results in speech and image recognition far exceeding earlier related technologies, with particular success in search technology, data mining, machine translation, natural language processing, multimedia learning, speech, recommendation and personalization, and other related fields. This article discusses the theoretical foundations of deep learning and surveys the algorithm's applications in various fields, providing a reference for deep learning research.