
Modeling and Analysis of Information Systems

Vol 30, No 4 (2023)

Discrete Mathematics in Relation to Computer Science

288-307
Abstract

The paper presents a graph model of the functioning of a network with adaptive topology, in which the network nodes are the vertices of the graph and data exchange between nodes is represented by its edges. The dynamic nature of network interaction complicates monitoring and controlling the operation of a network with adaptive topology, which is required to guarantee correct network interaction. The importance of this problem follows from the development of modern information and cyber-physical systems that are built on networks with adaptive topology. The dynamic nature of the links between nodes enables self-regulation of the network, but it also significantly complicates control over network operation, since no single pattern of network interaction can be identified. On the basis of the developed model, a graph algorithm for link prediction is proposed and extended to the case of peer-to-peer networks. The algorithm relies on significant parameters of network nodes that describe both their physical state (signal level, battery charge) and their role as objects of network interaction (centrality characteristics of the graph vertices). The correctness and adequacy of the algorithm are confirmed by experiments that model a peer-to-peer network with adaptive topology and its self-regulation when various nodes are removed.
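As a rough illustration of the kind of scoring such a link prediction scheme could use (the weights, attribute names and the combination rule below are assumptions for the sketch, not the authors' algorithm), here is a minimal Python example that ranks candidate links by combining vertex centrality with physical node attributes:

    import networkx as nx

    def link_scores(g, alpha=0.5):
        """Score non-adjacent node pairs; a higher score means a more likely future link."""
        deg = nx.degree_centrality(g)
        btw = nx.betweenness_centrality(g)
        scores = {}
        for u in g:
            for v in g:
                if u < v and not g.has_edge(u, v):
                    # structural part: centrality of both endpoints
                    structural = deg[u] + deg[v] + btw[u] + btw[v]
                    # physical part: assumed node attributes in [0, 1]
                    physical = min(g.nodes[u]["signal"], g.nodes[v]["signal"]) * \
                               min(g.nodes[u]["battery"], g.nodes[v]["battery"])
                    scores[(u, v)] = alpha * structural + (1 - alpha) * physical
        return sorted(scores.items(), key=lambda kv: -kv[1])

    g = nx.path_graph(5)                     # toy peer-to-peer topology 0-1-2-3-4
    for i in g:
        g.nodes[i]["signal"] = 0.5 + 0.1 * i
        g.nodes[i]["battery"] = 0.9 - 0.1 * i
    print(link_scores(g)[:3])                # top candidate links to establish next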

Theory of Software

308-339
Abstract

This work continues a series of articles on the development and verification of control programs based on an LTL specification. The essence of the approach is to describe program behavior using formulas of linear temporal logic (LTL) of a special form. The developed LTL specification can be verified directly with a model checking tool. Program code in an imperative programming language is then constructed unambiguously from the LTL specification; the translation is carried out using a template. The novelty of the work lies in the proposal of two LTL specifications of a new form, declarative and imperative, as well as in a stricter formal justification of this approach to program development and verification. A transition has been made to a more modern verification tool for finite and infinite systems, nuXmv. It is proposed to describe the behavior of control programs in a declarative style. For this purpose, a declarative LTL specification is introduced, which defines a labelled transition system as a formal model of program behavior. This method of describing behavior is quite expressive: a theorem on the Turing completeness of the declarative LTL specification is proved. To construct program code in an imperative language, the declarative LTL specification is then converted into an equivalent imperative LTL specification; an equivalence theorem guarantees that both specifications define the same behavior. The imperative LTL specification is translated into imperative program code according to the presented template. The declarative LTL specification, which is the subject of verification, and the control program built from it are guaranteed to define the same behavior in the form of the corresponding transition system. Thus, verification uses a model that is adequate to the real behavior of the control program.
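A minimal sketch of the general idea only, not the authors' LTL template or the nuXmv workflow: a "declarative" description of a controller as guarded next-state rules and its mechanical translation into an imperative step function. The controller, variable names and the safety check below are hypothetical.

    def make_step(rules):
        """rules: {var: [(guard, next_value), ...]}; all guards read the current state."""
        def step(state):
            new_state = dict(state)
            for var, cases in rules.items():
                for guard, value in cases:
                    if guard(state):
                        new_state[var] = value(state)
                        break                      # first applicable rule wins
            return new_state
        return step

    # hypothetical tank controller: the pump switches on below a low level and off above a high one
    rules = {
        "pump": [(lambda s: s["level"] < 2, lambda s: True),
                 (lambda s: s["level"] > 8, lambda s: False)],
        "level": [(lambda s: s["pump"], lambda s: s["level"] + 1),
                  (lambda s: True, lambda s: s["level"] - 1)],
    }

    step = make_step(rules)
    state = {"pump": False, "level": 5}
    for _ in range(20):
        state = step(state)
        assert 0 <= state["level"] <= 10           # safety check in the spirit of G(0 <= level <= 10)
    print(state)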

Algorithms in Computer Science

340-353
Abstract

Cartographic generalization is the process of reducing the graphic information taken from reality or from larger-scale maps so that only the information necessary at a specific scale is displayed. After generalization, a map shows only the principal and essential characteristics. The scale, purpose and theme of the map, the geographical features of the mapped region and the graphic dimensions of symbols are the main factors affecting cartographic generalization. Geometric simplification is one of the core components of cartographic generalization. The topological relations of spatial features also play an important role in spatial data organization, queries, updates and quality control. Various map transformations can change the relationships between features, especially since it is common practice to simplify each type of spatial feature independently (first administrative boundaries, then the road network, settlements, the hydrographic network, etc.). A refined description of topological relationships is needed to detect such spatial conflicts. Considering coverings and mesh structures allows us to reduce the more general problem of topological conflict correction to the problem of resolving topological conflicts within a single mesh cell. In this paper, a new simplification algorithm is proposed. Its distinctive feature is the joint simplification of a set of spatial objects of different types while preserving their topological relations. The proposed algorithm has a single parameter, the minimum map detail size (usually equal to one millimeter at the target map scale). The first step of the algorithm is the construction of a special mesh data structure. Based on it, a sequence of cells containing the points of each spatial object is formed. If a cell contains points of only one object, geometric simplification is performed within the bounding cell using the sleeve-fitting algorithm. If a cell contains points of several objects, geometric simplification is performed using a special topology-preserving procedure.
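A minimal sketch of the mesh idea under stated assumptions (toy coordinates and object names; the real algorithm builds a richer structure and applies sleeve-fitting inside the cells): bucket the points of each object into cells of the minimum-detail size and mark the cells shared by several objects, since these are the ones that would require the topology-preserving procedure.

    from collections import defaultdict

    def cell_of(point, cell_size):
        x, y = point
        return (int(x // cell_size), int(y // cell_size))

    def build_mesh(objects, cell_size):
        """objects: {object_id: [(x, y), ...]} -> {cell: set of object ids whose points fall in it}."""
        mesh = defaultdict(set)
        for oid, points in objects.items():
            for p in points:
                mesh[cell_of(p, cell_size)].add(oid)
        return mesh

    objects = {
        "road":  [(0.2, 0.1), (1.4, 0.9), (2.6, 1.1)],
        "river": [(0.3, 0.2), (1.1, 1.8), (2.5, 2.4)],
    }
    mesh = build_mesh(objects, cell_size=1.0)        # cell size = minimum map detail
    for cell, ids in sorted(mesh.items()):
        action = "simplify independently" if len(ids) == 1 else "topology-preserving procedure"
        print(cell, sorted(ids), "->", action)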

Theory of Data

354-365
Abstract

The development of fast algorithms for key generation, encryption and decryption does more than increase the efficiency of the corresponding operations. Such fast algorithms, for example for asymmetric cryptosystems based on quasi-cyclic codes, make it possible to study experimentally the dependence of the decoding failure rate on code parameters at small security levels and to extrapolate these results to large security levels. In this article, we explore efficient cyclic convolution algorithms designed, among other applications, for use in encoding and decoding algorithms for quasi-cyclic LDPC and MDPC codes. The corresponding convolutions operate on binary vectors, which can be either sparse or dense. The proposed algorithms achieve high speed by storing sparse vectors compactly, using hardware-supported XOR instructions, and replacing modulo operations with specialized loop transformations. These fast algorithms have potential applications not only in cryptography, but also in other areas where convolutions are used.
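The following is a minimal sketch of the underlying idea, not the paper's optimized routines: cyclic convolution of binary vectors over GF(2), with the sparse operand stored as a list of set-bit positions and each rotation performed with shifts and XOR rather than per-coefficient modular index arithmetic.

    def cyclic_convolve(dense_bits, sparse_positions, n):
        """dense_bits: an n-bit integer; sparse_positions: indices of the ones in the sparse vector (0 <= k < n)."""
        mask = (1 << n) - 1
        result = 0
        for k in sparse_positions:
            # rotate the dense vector left by k and accumulate with XOR (addition in GF(2))
            rotated = ((dense_bits << k) | (dense_bits >> (n - k))) & mask
            result ^= rotated
        return result

    n = 8
    dense = 0b10110001            # dense binary vector of length 8 (bit i = coefficient i)
    sparse = [0, 3, 5]            # sparse vector given by the positions of its ones
    print(format(cyclic_convolve(dense, sparse, n), "08b"))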

Computing Methodologies and Applications

366-381
Abstract

This article describes a series of experiments in the Gazebo simulation environment aimed at studying the influence of external weather conditions on the automatic landing of an unmanned aerial vehicle (UAV) on a moving platform using computer vision and a previously developed control system based on PID and polynomial controllers. As part of the research, methods for modeling external weather conditions were developed, and landing tests were carried out that simulated conditions such as wind, lighting, fog and precipitation, as well as their combinations. A successful landing on the platform was achieved in all experiments; landing time and landing accuracy were measured. Graphical and statistical analysis of the results revealed the influence of illumination, precipitation and wind on the UAV landing time, with the introduction of wind into the simulation, under any other external conditions, leading to the most significant increase in landing time. At the same time, the study did not reveal a systematic negative influence of the external conditions on landing accuracy. The results provide valuable information for further improvement of autonomous automatic landing systems for UAVs that do not rely on satellite navigation.
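For readers unfamiliar with the controller type mentioned above, here is a generic PID loop sketch (hypothetical gains and a toy plant, not the authors' landing controller): the kind of loop a vision-based landing system is assumed to run on each axis, driving the camera-measured offset to the platform toward zero.

    class PID:
        """Textbook PID controller: u = kp*e + ki*integral(e) + kd*de/dt."""
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, error, dt):
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # hypothetical use: the error is the horizontal offset (m) between the UAV and the platform marker
    pid = PID(kp=0.8, ki=0.05, kd=0.2)
    offset = 2.0
    for _ in range(50):
        command = pid.update(offset, dt=0.1)        # velocity command toward the platform
        offset -= command * 0.1                     # toy plant: the offset shrinks with the command
    print(round(offset, 3))                         # should approach zero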

Artificial Intelligence

382-393
Abstract

This work is devoted to the problem of named entity recognition in Russian-language texts using a CRF model. Two data sets were considered: refinancing documents with a well-defined document structure and semi-structured texts of court records. The model was tested with various sets of text features and CRF parameters (optimization algorithms). Averaged over all entities, the best F-measure was 0.99 for the structured documents and 0.86 for the semi-structured ones.
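A minimal sketch of CRF-based NER with sklearn-crfsuite (the token features, labels and toy examples below are hypothetical, not the feature set studied in the paper):

    import sklearn_crfsuite

    def token_features(sent, i):
        word = sent[i][0]
        return {
            "lower": word.lower(),
            "is_title": word.istitle(),
            "is_digit": word.isdigit(),
            "suffix3": word[-3:],
            "prev_lower": sent[i - 1][0].lower() if i > 0 else "<BOS>",
        }

    # toy training data: sentences as lists of (token, BIO label) pairs
    train = [[("Иванов", "B-PER"), ("подал", "O"), ("иск", "O")],
             [("Суд", "B-ORG"), ("отклонил", "O"), ("жалобу", "O")]]

    X = [[token_features(s, i) for i in range(len(s))] for s in train]
    y = [[label for _, label in s] for s in train]

    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
    crf.fit(X, y)
    print(crf.predict(X)[0])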

394-417
Abstract

The article is devoted to the task of sentiment detection in Russian sentences, understood as the author's attitude toward the sentence topic expressed through linguistic features. Most studies on this subject today use texts in a colloquial style, which limits the applicability of their results to other styles of speech, in particular to publicistic texts. To fill this gap, the authors developed a novel sentiment detection algorithm oriented toward publicistic sentences. The algorithm recursively applies appropriate rules to sentence parts represented as constituency trees. Most of the rules were proposed by an expert philologist, based on knowledge of expressive features in Russian philology, and were algorithmized over the constituency trees generated by the algorithm. A decision tree and a sentiment vocabulary are also used in the work. The article reports the results of evaluating the algorithm on the corpus of publicistic sentences OpenSentimentCorpus; the F-measure is 0.80. The results of error analysis are also presented.
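A minimal sketch of the recursive rule application (the lexicon, the negation rule and the toy tree are hypothetical, not the rules proposed by the philology expert): a constituent's sentiment is computed bottom-up from its children, with a negator flipping the polarity of its constituent and a small sentiment vocabulary scoring the leaves.

    LEXICON = {"прекрасный": 1, "провальный": -1}     # toy sentiment vocabulary
    NEGATORS = {"не"}

    def sentiment(node):
        """node is either a leaf word (str) or a constituent (label, [children])."""
        if isinstance(node, str):
            return LEXICON.get(node.lower(), 0)
        label, children = node
        total = sum(sentiment(child) for child in children)
        # toy rule: a negator inside the constituent flips its polarity
        if any(isinstance(c, str) and c.lower() in NEGATORS for c in children):
            total = -total
        return total

    tree = ("S", [("NP", ["проект"]), ("VP", ["не", ("ADJP", ["провальный"])])])
    print(sentiment(tree))                            # "не провальный" -> positive overall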

418-428
Abstract

In this work, we applied the multilingual text-to-text transformer (mT5) to the task of keyphrase generation for Russian scientific texts using the Keyphrases CS&Math Russian corpus. The automatic selection of keyphrases is a relevant task of natural language processing, since keyphrases help readers find an article easily and facilitate the systematization of scientific texts. In this paper, keyphrase selection is treated as a text summarization task. The mT5 model was fine-tuned on the abstracts of Russian research papers: the abstracts served as model input, and comma-separated lists of keyphrases served as output. The results of mT5 were compared with several baselines, including TopicRank, YAKE!, RuTermExtract, and KeyBERT, and are reported in terms of the full-match F1-score, ROUGE-1, and BERTScore. The best results on the test set were obtained by mT5 and RuTermExtract. The highest F1-score was demonstrated by mT5 (11.24%), exceeding RuTermExtract by 0.22%. RuTermExtract shows the highest ROUGE-1 score (15.12%). According to BERTScore, the best results were also obtained with these methods: mT5 reaches 76.89% (BERTScore with mBERT) and RuTermExtract 75.8% (BERTScore with ruSciBERT). Moreover, we evaluated the ability of mT5 to predict keyphrases that are absent from the source text. The important limitations of the proposed approach are the need for a training sample for fine-tuning and the possibly limited suitability of the fine-tuned model in cross-domain settings. The advantages of keyphrase generation with a pre-trained mT5 are that there is no need to define the number and length of keyphrases or to normalize the produced keyphrases, which is important for inflected languages, and that the model can generate keyphrases that are not present in the text explicitly.
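A minimal inference sketch with the Hugging Face transformers library (the checkpoint name is a placeholder; in the paper mT5 is fine-tuned on abstracts of Russian papers, and this only illustrates the input/output convention: abstract in, comma-separated keyphrases out):

    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    model_name = "google/mt5-small"   # placeholder; in practice a checkpoint fine-tuned on the keyphrase corpus
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

    abstract = "В статье рассматривается задача автоматического извлечения ключевых слов из научных текстов."
    inputs = tokenizer(abstract, return_tensors="pt", truncation=True, max_length=512)
    outputs = model.generate(**inputs, max_length=64, num_beams=4)
    keyphrases = tokenizer.decode(outputs[0], skip_special_tokens=True).split(", ")
    print(keyphrases)                 # list of generated keyphrases (meaningful only after fine-tuning)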



This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 1818-1015 (Print)
ISSN 2313-5417 (Online)