  • Methods for increasing the reliability of telecommunication systems in Turkmenistan

    This article explores methods for improving the reliability of telecommunication systems in Turkmenistan. The authors consider modern approaches to ensuring the stability and reliability of communication networks in the context of a rapidly changing technological environment. The article analyzes the main challenges faced by telecom operators in the country and proposes effective strategies to ensure the smooth operation of telecommunication systems. The results of the study allow us to identify key measures to improve the reliability of the communication infrastructure in Turkmenistan and ways to optimize user service processes.

    Keywords: communication infrastructure, trends, prospects, system reliability, mobile communications, evolution, 2G, 3G, 4G, network reliability

  • Method of building three-dimensional graphics based on distance fields

    This paper investigates the effectiveness of the distance field method for building 3D graphics in comparison with the traditional polygonal approach. The main attention is paid to the analytical representation of models, which makes it possible to determine the shortest distance to scene objects and provides high rendering speed even on low-end hardware. A comparative analysis covers the achievable level of model detail, the applicability of different light sources, reflection mapping, and model transformation. Conclusions are drawn about the promising potential of the distance field method for 3D graphics, especially in real-time rendering systems, and the relevance of further research and development in this area is emphasized. Within the framework of this work, a universal software implementation of the distance field method was developed (the core ray-marching loop is sketched below).

    Keywords: computer graphics, rendering, 3D graphics, ray marching, polygonal graphics, 3D graphics development, modeling, 3D models
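
    To make the ray-marching core concrete, here is a minimal, self-contained Python sketch, not the authors' implementation: a signed distance function for a sphere and the sphere-tracing loop that advances along a ray by the current distance value. The one-sphere scene, step limit, and tolerances are illustrative assumptions.

        import numpy as np

        def sphere_sdf(p, center, radius):
            # signed distance from point p to the sphere surface
            return np.linalg.norm(p - center) - radius

        def ray_march(origin, direction, max_steps=128, eps=1e-4, max_dist=100.0):
            # sphere tracing: step by the scene distance, which cannot overshoot
            t = 0.0
            for _ in range(max_steps):
                p = origin + t * direction
                d = sphere_sdf(p, np.array([0.0, 0.0, 5.0]), 1.0)  # toy scene
                if d < eps:
                    return t          # hit: distance along the ray
                t += d
                if t > max_dist:
                    break
            return None               # miss

        # a ray down +z from the origin hits the sphere at t ~= 4.0
        print(ray_march(np.zeros(3), np.array([0.0, 0.0, 1.0])))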

  • Forecasting the state of the vehicle sensor system

    The condition of a vehicle's sensor system is an effective indicator used by many other vehicle systems. This article is devoted to the problem of choosing a forecasting method for vehicle sensors. Sensor data are considered as multivariate time series, and the aim of the study is to determine the best forecasting model for this type of data. The LSTM neural network-based method and the VARMA statistical method were chosen for the analysis. These methods are preferred because of their ability to process multivariate series with complex relationships, their flexibility, which allows them to be applied to series of varying lengths in a wide variety of scenarios, and the high accuracy of their results in numerous applications. Data and plots from computational experiments are provided, enabling the determination of the preferred option for both single-step and multi-step forecasting of multivariate time series, based on error metrics and adaptability to rapid changes in data values (a minimal VARMA sketch is given below).

    Keywords: forecasting methods, forecast evaluation, LSTM, VARMA, time series, vehicle sensor system
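
    As a rough illustration of the VARMA side of the comparison, the sketch below fits a VARMA(1,1) model to a synthetic three-channel series with statsmodels and produces a five-step forecast. The random-walk data is a placeholder for real sensor readings, which are not reproduced here.

        import numpy as np
        from statsmodels.tsa.statespace.varmax import VARMAX

        rng = np.random.default_rng(0)
        data = rng.normal(size=(200, 3)).cumsum(axis=0)  # stand-in for 3 sensor channels

        model = VARMAX(data, order=(1, 1))   # VARMA(1, 1)
        res = model.fit(disp=False)
        print(res.forecast(steps=5))         # multistep forecast, shape (5, 3)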

  • Development of a client-server application for constructing a virtual museum

    The article describes the methodology for developing a client-server application intended for constructing a virtual museum. The creation of the server part of the application, with functions for processing and executing requests from the client part, as well as the creation of a database and interaction with it, is discussed in detail. The client part is developed using the Angular framework and the TypeScript language; the three-dimensional visualization is based on the three.js library, which is built on top of WebGL technology. The server part is developed on the ASP.NET Core platform in C#. The database schema is based on a Code-First approach using Entity Framework Core. Microsoft SQL Server is used as the database management system.

    Keywords: client-server application, virtual tour designer, virtual museum, three.js library, framework, Angular, ASP.NET Core, Entity Framework Core, Code-First, WebGL

  • Using virtual reality technology to develop an algorithm to stop bleeding

    This article identifies the main advantages and disadvantages of using VR simulators to improve the professionalism of employees when performing work at an enterprise (organization). An analysis of existing projects used in various industries was carried out, and a description of the developed first-aid project is presented. The developed simulator makes it possible to practice skills in stopping bleeding in different parts of the body: the arm, leg, and neck. While working on the project, the main factors influencing the quality of the developed VR simulator were identified. In particular, it was found that VR simulators are not capable of fully simulating fine motor skills of the hands; in addition, the simulator has restrictions on the position of the body in space. Despite these shortcomings, the simulator allows key first-aid skills to be practiced.

    Keywords: virtual reality, VR simulator, personnel training, professional activity, first aid, information technology, modeling

  • Model distributions of mathematical statistics in granulometric analysis

    Model multiparameter distributions used in science and technology are analyzed and systematized. Particular attention is paid to the Rosin-Rammler-Weibull-Gnedenko and Kolmogorov-Gauss distributions, which adequately describe single and multiple crushing. The suitability of these distributions for modeling the granulometric composition of industrial waste from mechanical processing is confirmed by physical and computer experiments (the Rosin-Rammler cumulative curve is sketched below).

    Keywords: distribution function, mathematical model, generalized hyperbolic distributions, crushing, bulk medium, mechanical processing
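
    The Rosin-Rammler (Weibull) cumulative passing curve referred to above has the standard form R(d) = 1 - exp(-(d/d0)^n), where d0 is the characteristic size and n the uniformity exponent. The sketch below evaluates it and fits both parameters to hypothetical sieve data (the numbers are invented for the example, not taken from the article).

        import numpy as np
        from scipy.optimize import curve_fit

        def rosin_rammler(d, d0, n):
            # cumulative mass fraction passing sieve aperture d
            return 1.0 - np.exp(-(d / d0) ** n)

        # hypothetical sieve analysis: aperture (mm) vs. fraction passing
        d = np.array([0.063, 0.125, 0.25, 0.5, 1.0])
        passing = np.array([0.08, 0.22, 0.50, 0.80, 0.96])

        (d0, n), _ = curve_fit(rosin_rammler, d, passing, p0=(0.3, 1.0))
        print(f"d0 = {d0:.3f} mm, n = {n:.2f}")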

  • Identification of physical and technical limitations in the optical method of granulometry of process dust

    Samples of metal dust generated during milling of gray cast iron were collected experimentally. The machine operating mode, dust collection points, and blowing conditions were varied in the experiments. To ensure the reliability of the result, the physical stage of the analysis of the dimensional characteristics of the dust was performed using two methods: sieving and direct optical measurements. Significant discrepancies in the statistical parameters obtained by different methods were revealed. A hypothesis explaining the differences was proposed and confirmed. An integrated approach to the physical stage of dispersion analysis of bulk media is recommended.

    Keywords: wood dust, parametric identification, sieve analysis, laser diffraction, micrographs, mathematical modeling, digital twin

  • Vision of the modern concept of medical decision support systems in the Russian healthcare system

    This article presents a study on an approach to the development of a medical decision support system (DSS) for selecting formulas for calculating the optical power of intraocular lenses (IOLs) used in the surgical treatment of cataracts. The system is based on methods for building recommendation systems, which makes it possible to automate the process of choosing an IOL and minimize the risk of human error. The implementation of the system in the practice of medical organizations is expected to provide high accuracy and efficiency, significantly reduce the time required for decision-making, and improve the results of surgical interventions (an illustrative IOL power formula is sketched below).

    Keywords: intraocular lens, ophthalmology, formulas for calculating optical power, web application, machine learning, eye parameters, prognostic model, recommendation system, prediction accuracy, medical decision
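
    The abstract does not specify which IOL formulas the system ranks; purely for illustration, the sketch below implements the classical SRK regression formula P = A - 2.5L - 0.9K, where A is the lens constant, L the axial length in mm, and K the mean keratometry in dioptres. The constants and inputs are textbook-style values, not data from the study.

        def srk_iol_power(a_constant: float, axial_length_mm: float,
                          mean_k_diopters: float) -> float:
            # classical SRK regression formula for emmetropia: P = A - 2.5*L - 0.9*K
            return a_constant - 2.5 * axial_length_mm - 0.9 * mean_k_diopters

        # example: A = 118.4, L = 23.5 mm, K = 43.5 D  ->  20.5 D
        print(srk_iol_power(118.4, 23.5, 43.5))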

  • Application of the methodology for assessing the reliability of a construction project taking into account uncertainty

    The occurrence, identification and management of risks arising during the construction process are analyzed. Decision-making under uncertainty in construction projects requires methods that ensure the reliability and effectiveness of decisions; such a method was developed in the Russian Project Management Association. The paper provides an example of using this method on a real construction site. The risks arising during the implementation of a construction project were analyzed, a risk map was created for the project, and the PERT method was applied when creating the calendar plan (the standard PERT estimate is sketched below).

    Keywords: uncertainty, risk event, probability, risk, damage, danger, reliability, risk analysis, investment and construction project, PERT method
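
    For reference, the PERT estimate used in calendar planning combines optimistic (O), most likely (M) and pessimistic (P) durations as E = (O + 4M + P)/6, with variance ((P - O)/6)^2. A minimal sketch with made-up durations:

        def pert_estimate(o: float, m: float, p: float) -> tuple[float, float]:
            # expected duration and variance under the PERT beta approximation
            mean = (o + 4 * m + p) / 6
            variance = ((p - o) / 6) ** 2
            return mean, variance

        # hypothetical activity: 10 days optimistic, 14 likely, 24 pessimistic
        print(pert_estimate(10, 14, 24))  # (15.0, ~5.44)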

  • Designing multicomponent simulation models using GPT-based LLM

    Modern simulation model design involves a wide range of specialists from various fields, and additional resources are required for developing and debugging software code. This study aims to demonstrate the capabilities of large language models (LLMs) applied at all stages of creating and using simulation models, starting from the formalization of dynamic system models, and to assess the contribution of these technologies to speeding up the creation of simulation models and reducing their complexity. The model development methodology includes stages of formalization, verification, and the creation of a mathematical model based on dialogues with LLMs. Experiments were conducted on the example of creating a multi-agent community of robots using hybrid automata (a minimal hybrid-automaton sketch is given below). The experiments showed that the model created with the help of LLMs produces outcomes identical to those of the model developed in a specialized simulation environment. Based on the analysis of the experimental results, it can be concluded that LLMs have significant potential for accelerating and simplifying the creation of complex simulation models.

    Keywords: Simulation modeling, large language model, neural network, GPT-4, simulation environment, mathematical model
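
    As a minimal illustration of the hybrid-automaton formalism mentioned above (not the authors' model), the sketch below simulates one robot agent with two discrete modes, "move" and "charge", continuous dynamics in each mode, and guard conditions that trigger mode switches. All rates and thresholds are invented for the example.

        def hybrid_step(mode: str, x: float, battery: float, dt: float = 0.1):
            # one integration step of a two-mode hybrid automaton
            if mode == "move":
                x += 1.0 * dt              # continuous flow: drive forward
                battery -= 0.5 * dt        # ...while draining the battery
                if battery <= 0.2:         # guard: low charge
                    mode = "charge"        # discrete transition
            else:
                battery += 1.0 * dt        # continuous flow: recharge
                if battery >= 1.0:         # guard: full charge
                    mode = "move"
            return mode, x, battery

        state = ("move", 0.0, 1.0)
        for _ in range(50):
            state = hybrid_step(*state)
        print(state)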

  • Application of language neural network models for malware detection

    The growing popularity of large language models in various fields of scientific and industrial activity is producing solutions that apply these technologies to very different tasks. This article proposes using the BERT, GPT, and GPT-2 language models to detect malicious code. A neural network model, pre-trained on natural-language texts, is further trained on a preprocessed dataset containing program files with malicious and harmless code. The preprocessing translates program files, given as machine instructions, into a textual description in a formalized language. The model trained in this way is used to classify software by whether it contains malicious code. The article reports an experiment on the use of the proposed model; the quality of this approach is evaluated in comparison with existing antivirus technologies, and ways to improve the characteristics of the model are suggested (a fine-tuning sketch is given below).

    Keywords: antivirus, neural network, language models, malicious code, machine learning, model training, fine-tuning, BERT, GPT, GPT-2
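
    A minimal sketch of the fine-tuning step described above, using the Hugging Face transformers library. The two "formalized descriptions" of machine code are invented stand-ins for the preprocessed dataset, label 1 marks malicious code, and only a single gradient step is shown, not the full training loop.

        import torch
        from transformers import AutoTokenizer, AutoModelForSequenceClassification

        tok = AutoTokenizer.from_pretrained("bert-base-uncased")
        model = AutoModelForSequenceClassification.from_pretrained(
            "bert-base-uncased", num_labels=2)   # 0 = benign, 1 = malicious

        texts = ["call open_file ; write registry_key ; jump payload",  # hypothetical
                 "push frame ; load config ; return value"]             # hypothetical
        labels = torch.tensor([1, 0])

        batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
        out = model(**batch, labels=labels)   # loss computed internally
        out.loss.backward()                   # one fine-tuning gradient step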

  • Robotic demonstration training using a diffusion model and reinforcement learning algorithms

    The paper proposes a two-stage method of training a robot from demonstrations, combining a diffusion generative model and online fine-tuning using Proximal Policy Optimization. In the offline phase, the diffusion model uses a limited set of expert demonstrations to generate synthetic "pseudo-demonstrations", expanding the variability and coverage of the original dataset. This eliminates the narrow specialization of the strategy and improves its ability to generalize. In the online phase, a robot with a pre-trained strategy adjusts its actions in a real environment (or in a high-precision simulation), which significantly reduces the risks of unsafe actions and the number of necessary interactions. Additionally, parameter-efficient fine-tuning has been introduced to reduce the computational cost of online learning, as well as value guidance that focuses the generation of new data on regions of the state-action space with high Q-values (sketched below). Experiments on tasks from the D4RL suite (Hopper, Walker2d, HalfCheetah) show that our approach achieves the greatest accumulated reward with lower computational costs compared to alternatives. A t-SNE analysis indicates a shift of the synthetic data toward regions of the space with high Q-values, contributing to accelerated learning. The results confirm the promise of the proposed method for robotic applications, where a limited volume of demonstrations must be combined with a safe and effective online phase.

    Keywords: robot learning from demonstrations, diffusion generative models, reinforcement learning, Proximal Policy Optimization
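
    One way to read the value-guidance step is as filtering the diffusion model's pseudo-demonstrations by a learned Q-function; a simplified sketch of that idea (our reading, not the paper's code):

        import numpy as np

        def select_pseudo_demos(candidates, q_fn, keep_frac=0.2):
            # candidates: list of (state, action) pairs from the diffusion model
            # q_fn:       learned critic, q_fn(state, action) -> float
            scores = np.array([q_fn(s, a) for s, a in candidates])
            k = max(1, int(keep_frac * len(candidates)))
            best = np.argsort(scores)[-k:]    # indices of the top-Q transitions
            return [candidates[i] for i in best]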

  • An overview of the use of large language models in information tasks of decision support systems using the example of healthcare

    The article is devoted to the application of large language models (LLMs) in information tasks of decision support systems, using healthcare as an example. The key LLM architectures and their practical implementations are considered, as well as the capabilities of these models for natural language processing and medical data analysis. Special attention is paid to the role of LLMs in automating decision-making processes, including optimizing access to knowledge from clinical recommendations. Examples of the use of LLMs in various fields of medicine are presented. In addition, the prospects for further development of LLMs in healthcare and the related challenges are discussed.

    Keywords: large language models, natural language processing, decision support systems (DSS), industrial engineering, clinical guidelines, international classification of diseases

  • Methodology for optimal management of connections of participants in business events

    In the article, the authors propose a methodology for managing connections in a community, based on a heuristic algorithm for the optimal seating of participants in a multi-round networking event so as to maximize the likelihood of new partnerships at offline events. The seating algorithm is based on solving the NP-hard maximum clique problem (a simplified sketch is given below); the resulting solution is then optimized with a permutation crossover algorithm.

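    A simplified sketch of the clique-based seating step (the permutation-crossover optimization is omitted): participants who have not yet met form the edges of a graph, and each table of a round is extracted as a maximum clique, so everyone at the table is a new contact for everyone else. The exact networkx solver is exponential in the worst case, which is why the article resorts to a heuristic.

        import networkx as nx

        def seat_round(participants, already_met, table_size):
            # edge = this pair has not met yet
            g = nx.Graph()
            g.add_nodes_from(participants)
            g.add_edges_from((a, b) for a in participants for b in participants
                             if a < b and frozenset((a, b)) not in already_met)
            tables = []
            while g.number_of_nodes() >= table_size:
                clique, _ = nx.max_weight_clique(g, weight=None)  # NP-hard step
                table = clique[:table_size]
                tables.append(table)
                g.remove_nodes_from(table)
            return tables

        print(seat_round(list(range(12)), set(), table_size=4))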

  • Quality Assessment of Natural Landscape Images Colorization based on Neural Network Autoencoder

    The article discusses the application of a neural network autoencoder to the problem of monochrome image colorization. The network architecture, the training method, and the preparation of training and validation data are described. A dataset of 540 natural landscape images with a resolution of 256 by 256 pixels was used for training. The quality of the model's outputs was compared, and the average values of the quality metrics, as well as the mean squared error over the outputs of the VGG model, are presented (a minimal autoencoder sketch is given below).

    Keywords: neural networks, machine learning, autoencoder, image quality analysis, colorization, CIELAB
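
    A minimal PyTorch sketch of the kind of convolutional autoencoder the abstract describes: it takes the CIELAB lightness channel L of a 256x256 image and predicts the two chroma channels a and b. Layer sizes are illustrative assumptions, not the article's architecture.

        import torch
        import torch.nn as nn

        class ColorAutoencoder(nn.Module):
            # maps the L (lightness) channel to the a/b chroma channels of CIELAB
            def __init__(self):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                )
                self.decoder = nn.Sequential(
                    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(32, 2, 4, stride=2, padding=1), nn.Tanh(),
                )

            def forward(self, l_channel):
                return self.decoder(self.encoder(l_channel))

        model = ColorAutoencoder()
        fake_l = torch.randn(1, 1, 256, 256)   # one 256x256 lightness channel
        ab = model(fake_l)                     # predicted chroma, (1, 2, 256, 256)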

  • Algorithm for generating three-dimensional terrain models in the monocular case using deep learning models

    The article is devoted to the development of an algorithm for three-dimensional terrain reconstruction from single satellite images. The algorithm builds three-dimensional models from the outputs of two deep learning models that solve the problems of elevation restoration and instance segmentation, respectively. The paper also presents methods for processing large satellite images with deep learning models (a tiling sketch is given below). The proposed algorithm makes it possible to significantly reduce the input data requirements in the three-dimensional reconstruction problem.

    Keywords: three-dimensional reconstruction, deep learning, computer vision, elevation restoration, segmentation, depth determination, contour approximation
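
    A hypothetical sketch of two auxiliary steps such an algorithm needs: cutting a large satellite scene into overlapping tiles the networks can ingest, and assigning one representative height to each segmented building footprint. Tile and overlap sizes are assumptions, not values from the article.

        import numpy as np

        def tile_image(img, tile=512, overlap=64):
            # split a large scene into overlapping tiles for the two networks
            step = tile - overlap
            h, w = img.shape[:2]
            return [((y, x), img[y:y + tile, x:x + tile])
                    for y in range(0, max(h - overlap, 1), step)
                    for x in range(0, max(w - overlap, 1), step)]

        def building_heights(height_map, instance_masks):
            # median restored elevation inside each instance mask, per building id
            return {i: float(np.median(height_map[m]))
                    for i, m in instance_masks.items()}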

  • Research on the use of the MATLAB Simulink software environment as a development environment for microcontrollers of the STM32 family

    This article presents a study evaluating the use of the MATLAB Simulink environment for developing microcontroller systems of the STM32 family. The capabilities of Simulink for modeling and testing control algorithms, as well as for generating code that can be deployed directly to microcontrollers, are analyzed. The article describes in detail the process of creating conceptual models and their dynamic simulation. The advantages of using Simulink include speeding up the development process through automated build and the ability to adjust model parameters in real time. In addition, Simulink can generate processor-optimized code, which significantly increases the efficiency of microcontroller systems. However, attention is also drawn to some limitations, such as the need to create a configuration file in STM32CubeMX and potential difficulties in configuring it. The article provides an in-depth analysis of the application of Simulink to the development of STM32 microcontrollers and can serve as key material for those who want to deepen their knowledge in this area.

    Keywords: model-oriented programming, MATLAB, Simulink, STM32, microcontroller, code generation, automatic control system, DC motor

  • Design and Development of Automated Information Systems for Recording Parameters of the Technological Process of Production of an Industrial Enterprise

    The article is devoted to the creation of a highly specialized automated information system for recording the parameters of the technological process of production of an industrial enterprise. The development of such software products will simplify and speed up the work of technologists and reduce the influence of the human factor in collecting and processing data.

    Keywords: automated information system, system for recording production process parameters, Rammler-Breich diagram, role-based data access system

  • Multi-agent search optimization algorithm based on hybridization and co-evolutionary procedures

    The paper proposes a hybrid multi-agent solution-search algorithm with a reconfigurable architecture, containing procedures that simulate the behavior of a bee colony and a swarm of agents, together with co-evolution methods. The developed hybrid algorithm is based on a hierarchical multi-population approach, which uses the diversity of the solution set to expand the search areas. Formulations of the canonical bee colony and agent swarm metaheuristics are presented. As a measure of the similarity of two solutions, affinity is used: a measure of equivalence, relatedness (similarity, closeness) of two solutions (sketched below). The principle of operation and application of the directed mutation operator is described. A modified chromosome swarm paradigm is presented, which makes it possible to search for solutions with integer parameter values, in contrast to the canonical methods. The time complexity of the algorithm ranges from O(n^2) to O(n^3).

    Keywords: swarm of agents, bee colony, co-evolution, search space, hybridization, reconfigurable architecture
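
    Minimal sketches of the two ingredients named above, under our own simplifying assumption of integer-vector solutions: affinity as the fraction of matching components, and a directed mutation that pulls random components of a solution toward the current best one.

        import random

        def affinity(a, b):
            # share of identical components in two integer solution vectors
            return sum(x == y for x, y in zip(a, b)) / len(a)

        def directed_mutation(solution, best, rate=0.3):
            # each component jumps to the best-known value with probability `rate`
            return [b if random.random() < rate else s
                    for s, b in zip(solution, best)]

        print(affinity([1, 2, 3, 4], [1, 2, 0, 4]))          # 0.75
        print(directed_mutation([5, 5, 5, 5], [1, 2, 3, 4]))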

  • Planning and designing an organization's information system: stages and methods

    Information technologies are used in all spheres of modern society. Databases and document flow in organizations must be clearly organized and streamlined, and the interconnected work of company departments and services must be ensured, in order to collect and process information flows and make effective management decisions. The article shows where the planning and design stages of information technologies, and the methods for their development, fit in the algorithm for forming the strategy of an organization's IT project. Approaches to the formation of automated workplaces are shown using the example of the organizational and managerial structure of an enterprise. The services and departments responsible for planning, accounting, analysis and control of the organization's financial results are identified, leading to conclusions about directions for improving the quality of IT project development.

    Keywords: information system, IT project, planning, design, modeling, automated workstations

  • Using a word-similarity method to evaluate text vectorization algorithms

    The article provides a brief description of existing methods for vectorizing natural-language texts. An evaluation method based on determining the similarity of words is described (sketched below). A comparative analysis of several vectorizer models is carried out, the process of selecting evaluation data is described, and the performance results of the models are compared.

    Keywords: natural language processing, vectorization, word-form embedding, semantic similarity, correlation
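
    The word-similarity evaluation reduces to one number: the sketch below computes cosine similarities for word pairs in a vectorizer's embedding space and correlates them with human judgments via Spearman's rho. The toy vectors and scores are placeholders for a real benchmark.

        import numpy as np
        from scipy.stats import spearmanr

        def cosine(u, v):
            return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

        def evaluate(vectors, pairs, human_scores):
            # vectors: word -> np.ndarray; pairs: [(w1, w2), ...]
            model_scores = [cosine(vectors[a], vectors[b]) for a, b in pairs]
            rho, _ = spearmanr(model_scores, human_scores)
            return rho

        toy = {"car": np.array([1.0, 0.1]), "auto": np.array([0.9, 0.2]),
               "tree": np.array([0.0, 1.0])}
        pairs = [("car", "auto"), ("car", "tree"), ("auto", "tree")]
        print(evaluate(toy, pairs, [9.0, 1.5, 1.0]))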

  • Implementation of the LSH algorithm using PL/pgSQL

    Our lives are permeated by data, with endless streams of information passing through computer systems, and it is impossible to imagine modern software without interaction with databases. Many different DBMSs exist, depending on the purpose for which the information is used. The article discusses the locality-sensitive hashing (LSH) algorithm, implemented in the PL/pgSQL language, which makes it possible to search for similar documents in a database (the banding idea is sketched below).

    Keywords: LSH, hashing, field, string, text data, query, software, SQL
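
    The article implements LSH in PL/pgSQL; for brevity, the sketch below shows the same two stages in Python: MinHash signatures over token sets, then banding, where documents whose signature bands collide fall into the same bucket and become candidate near-duplicates. The hash count and band width are tunable assumptions.

        import hashlib

        def minhash(tokens, num_hashes=64):
            # one signature value per seeded hash function (here: seeded MD5)
            return [min(int(hashlib.md5(f"{seed}:{t}".encode()).hexdigest(), 16)
                        for t in tokens)
                    for seed in range(num_hashes)]

        def lsh_buckets(signatures, band_width=8):
            # split each signature into bands; equal bands -> same candidate bucket
            buckets = {}
            for doc_id, sig in signatures.items():
                for start in range(0, len(sig), band_width):
                    key = (start, tuple(sig[start:start + band_width]))
                    buckets.setdefault(key, []).append(doc_id)
            return buckets

        docs = {"a": "the cat sat on the mat",
                "b": "the cat sat on a mat",
                "c": "completely different text"}
        sigs = {k: minhash(set(v.split())) for k, v in docs.items()}
        # "a" and "b" usually share at least one bucket; "c" stays alone
        print({tuple(v) for v in lsh_buckets(sigs).values() if len(v) > 1})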

  • Telegram bot for remote server management using SSH and PostgreSQL database

    The article describes the process of developing and optimizing a Telegram bot for remote server management. As part of the study, an application was developed in Python using the Telegram API, with the PostgreSQL DBMS used to store workflow data. The system is designed to streamline administration workflows by giving users an easy way to manage servers via Telegram. With the help of the created bot, users can control servers, execute the necessary commands remotely, receive up-to-date information, and monitor server status in real time. The development of the bot included several stages, from architecture design to testing and deployment. Particular attention was paid to data security and system reliability to ensure stable operation and protection from unauthorized access. As a result, a functional and convenient tool was created that significantly simplifies administration tasks and increases the efficiency of server management (a minimal sketch is given below).

    Keywords: python, ssh, it-service, telegram-api, postgresql, psycopg2, aiogram, sql, bot, administration, remote control
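
    A minimal sketch of the bot's core, assuming aiogram 3.x and paramiko; the token, host, user, and key path are placeholders, and the PostgreSQL layer is omitted. A /uptime command runs a fixed command on a remote server over SSH and replies with its output.

        import asyncio
        import paramiko
        from aiogram import Bot, Dispatcher
        from aiogram.filters import Command
        from aiogram.types import Message

        bot = Bot(token="YOUR_BOT_TOKEN")   # placeholder
        dp = Dispatcher()

        def run_ssh(host: str, user: str, key_path: str, cmd: str) -> str:
            client = paramiko.SSHClient()
            client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
            client.connect(host, username=user, key_filename=key_path)
            _, stdout, _ = client.exec_command(cmd)
            out = stdout.read().decode()
            client.close()
            return out

        @dp.message(Command("uptime"))
        async def uptime(message: Message):
            # blocking SSH call is moved off the event loop
            out = await asyncio.to_thread(run_ssh, "203.0.113.10", "admin",
                                          "~/.ssh/id_rsa", "uptime")
            await message.answer(out)

        asyncio.run(dp.start_polling(bot))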

  • Algorithm for searching for patterns of building location using geoinformation technologies

    The paper proposes a method for identifying patterns in the relative positions of buildings, which can be used to analyze the dispersion of air pollutants in urban areas. The impact of building configuration on pollutant dispersion in the urban environment is investigated, patterns of building arrangements are identified, and methods and techniques for recognizing buildings are examined. The outcomes of applying the proposed method to identify building alignments are discussed (a simple alignment test is sketched below).

    Keywords: patterns of building location, geoinformation technologies, GIS, geoinformation systems, atmospheric air
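
    One simple way to test a candidate group of buildings for alignment, shown in the sketch below, is a least-squares line fit through their centroids with a residual threshold; real footprint extraction from GIS data and the handling of near-vertical rows are left out. The tolerance is an assumption.

        import numpy as np

        def is_aligned(centroids, tol=5.0):
            # fit y = k*x + c through building centroids; small RMS residual = aligned
            pts = np.asarray(centroids, dtype=float)
            x, y = pts[:, 0], pts[:, 1]
            A = np.stack([x, np.ones_like(x)], axis=1)
            _, residuals, *_ = np.linalg.lstsq(A, y, rcond=None)
            if residuals.size == 0:     # degenerate case (e.g. only 2 points)
                return True
            rmse = float(np.sqrt(residuals[0] / len(x)))
            return rmse < tol

        row = [(0, 0), (10, 1), (20, 2.5), (30, 3)]
        print(is_aligned(row))  # True: nearly collinear centroids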

  • A gaming approach to diagnosing depression based on user behavior analysis

    This article is dedicated to developing a method for diagnosing depression by analyzing user behavior in a video game built on the Unity platform. The method employs machine learning to train classification models on data from the gaming sessions of users with confirmed diagnoses of depression. As part of the research, users play a video game while their in-game behavior is analyzed against depression criteria taken from the DSM-5 diagnostic manual. This data is then used to train and evaluate machine learning models capable of classifying users by their in-game behavior. Gaming session data is serialized and stored in the Firebase Realtime Database in text format for further use by the classification model. Classification methods such as decision trees, k-nearest neighbors, support vector machines, and random forests were applied (a minimal sketch is given below). The method demonstrates the prospects of remote depression diagnosis using video games: machine learning models trained on gaming session data can effectively distinguish users with and without depression, confirming the potential of this approach for early identification of depressive states. Using video games as a diagnostic tool makes detecting mental disorders more accessible and engaging, which can raise awareness and aid in combating depression in society.

    Keywords: videogame, unity, psychiatric diagnosis, depression, machine learning, classification, behavior analysis, in-game behavior, diagnosis, virtual space
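
    A minimal sketch of the classification stage, assuming behavioral features have already been extracted from the serialized session logs; the random features and labels are placeholders for real data, and a random forest stands in for the family of models compared in the article.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import classification_report
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.random((200, 5))       # e.g. idle time, movement speed, retries...
        y = rng.integers(0, 2, 200)    # 1 = confirmed diagnosis, 0 = control

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                                  random_state=0)
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X_tr, y_tr)
        print(classification_report(y_te, clf.predict(X_te)))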