Articles
In this paper, we offer an efficient parallel algorithm for solving the NP-complete knapsack problem in its basic, so-called 0-1 variant. Branch-and-bound methods have long been used to find its exact solution, and various options for parallelizing the computations, with varying degrees of efficiency, are used to speed up the solving. We propose an algorithm for solving the problem based on the paradigm of recursive-parallel computations. We consider it well suited to problems of this kind, where it is difficult to break the computation up in advance into a sufficient number of subtasks of comparable complexity, since they appear dynamically at run time. As the main tool for programming the algorithm we used the RPM ParLib library, developed by the author, which allows effective applications for parallel computing on a local network to be developed in the .NET Framework. Such applications can generate parallel branches of computation directly during program execution and dynamically redistribute work between computing modules. Any language with .NET Framework support can be used with this library. For our experiments, we developed several C# applications using it. The main purpose of these experiments was to study the speedup achieved by recursive-parallel computing. A detailed description of the algorithm and its testing, as well as the results obtained, are given in the paper.
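A minimal sketch of recursive-parallel branch and bound for 0-1 knapsack, in C# since the paper works in .NET. It is not the paper's implementation: the RPM ParLib API is not reproduced here, and the standard .NET Task library stands in for it; the test instance is hypothetical.

```csharp
// Recursive-parallel branch and bound for 0-1 knapsack (illustrative sketch).
// Items are sorted by value/weight ratio so the fractional bound is valid.
using System;
using System.Threading;
using System.Threading.Tasks;

class Knapsack
{
    static int n;
    static int[] weight, value;
    static int best;                      // best total value found so far

    // Optimistic bound: fill the remaining capacity fractionally.
    static double Bound(int i, int cap, int val)
    {
        double b = val;
        for (; i < n && cap > 0; i++)
        {
            int take = Math.Min(cap, weight[i]);
            b += (double)value[i] * take / weight[i];
            cap -= take;
        }
        return b;
    }

    // Atomically raise `best` to v if v is larger.
    static void RaiseBest(int v)
    {
        int cur;
        while (v > (cur = Volatile.Read(ref best)))
            if (Interlocked.CompareExchange(ref best, v, cur) == cur) return;
    }

    static void Branch(int i, int cap, int val, int depth)
    {
        if (i == n) { RaiseBest(val); return; }
        if (Bound(i, cap, val) <= Volatile.Read(ref best)) return;   // prune

        Action include = () =>
        { if (weight[i] <= cap) Branch(i + 1, cap - weight[i], val + value[i], depth + 1); };
        Action exclude = () => Branch(i + 1, cap, val, depth + 1);

        if (depth < 4)                    // spawn parallel branches near the root only
        {
            var t = Task.Run(include);
            exclude();
            t.Wait();
        }
        else { include(); exclude(); }
    }

    static void Main()
    {
        weight = new[] { 4, 3, 2, 5, 1 };
        value  = new[] { 10, 7, 4, 9, 1 };
        n = weight.Length;
        Branch(0, cap: 8, val: 0, depth: 0);
        Console.WriteLine($"Best value: {best}");   // prints 18
    }
}
```

Spawning tasks only near the root mimics the dynamic generation of parallel branches the abstract describes: deeper subtrees stay sequential so that task-creation overhead does not swamp the work.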
In this paper, we consider a schedulability analysis problem for real-time modular computer systems (RT MCS). A system configuration is called schedulable if all jobs finish within their deadlines. The authors propose a stopwatch-automata-based general model of RT MCS operation. A model instance for a given RT MCS configuration is a network of stopwatch automata (NSA), and it can be built automatically from the general model. A system operation trace, which is necessary for checking the schedulability criterion, can be obtained from the corresponding NSA trace. The paper substantiates the correctness of the proposed approach. A set of correctness requirements for the models of system components and for the whole system model was derived from RT MCS specifications. The authors proved that if all models of system components satisfy the corresponding requirements, then the whole system model built according to the proposed approach satisfies its correctness requirements and is deterministic (i.e., for a given configuration, the trace generated by a model run is uniquely determined). The determinism of the model implies that any single model run can be used for schedulability analysis. This fact is crucial for the efficiency of the approach, as the number of possible model runs grows exponentially with the number of jobs in a system. Correctness requirements for the models of system components can be checked automatically by a verifier using the observer automata approach. Using the UPPAAL verifier, the authors proved that all the developed models of system components satisfy the corresponding requirements. User-defined models of system components can also be used for system modeling if they satisfy the requirements.
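A minimal sketch of the schedulability criterion itself: every job must finish within its deadline. This is a plain discrete-time simulation of preemptive fixed-priority scheduling on one processor; the stopwatch-automata model and UPPAAL are not reproduced, and the job set is hypothetical.

```csharp
// Checks the criterion "all jobs finish within their deadlines" on a trace
// produced by a simple preemptive fixed-priority scheduler (lower number =
// higher priority). Illustrative only; not the paper's NSA-based model.
using System;
using System.Collections.Generic;
using System.Linq;

record Job(string Name, int Release, int Cost, int Deadline, int Priority);

class Schedulability
{
    static bool IsSchedulable(Job[] jobs, int horizon)
    {
        var remaining = jobs.ToDictionary(j => j.Name, j => j.Cost);
        var finish = new Dictionary<string, int>();

        for (int t = 0; t < horizon; t++)
        {
            // Run the highest-priority job that is released and unfinished.
            var job = jobs.Where(j => j.Release <= t && remaining[j.Name] > 0)
                          .OrderBy(j => j.Priority)
                          .FirstOrDefault();
            if (job == null) continue;
            if (--remaining[job.Name] == 0) finish[job.Name] = t + 1;
        }
        // Schedulable iff every job finished, and within its deadline.
        return jobs.All(j => finish.TryGetValue(j.Name, out var f) && f <= j.Deadline);
    }

    static void Main()
    {
        var jobs = new[]
        {
            new Job("A", Release: 0, Cost: 2, Deadline: 5, Priority: 1),
            new Job("B", Release: 1, Cost: 3, Deadline: 8, Priority: 2),
        };
        Console.WriteLine(IsSchedulable(jobs, horizon: 10) ? "schedulable" : "deadline miss");
    }
}
```

Because the scheduler here is deterministic, one run suffices for the verdict, which is the same property the paper establishes for its automata model.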
Software Defined Networking (SDN) is a promising paradigm for network management. It centralizes the network intelligence on a dedicated server, which runs the network operating system and is called the SDN controller. It was assumed that such an architecture would improve network performance and monitoring. However, the centralized control architecture of SDNs brings novel challenges to reliability, scalability, fault tolerance and interoperability. These problems are especially acute for large data center networks and can be solved by combining SDN controllers into clusters, called multi-controllers. Multi-controller architectures have become very important for SDN-enabled networks. This paper gives a comprehensive overview of SDN multi-controller architectures. The authors review several of the most popular distributed controllers in order to indicate their strengths and weaknesses, and investigate and classify the approaches used. The paper explains in detail the differences among the various types of multi-controller architectures, their distribution methods and communication systems. Furthermore, it covers already implemented architectures as well as some architectures still under consideration, describing their design, communication process and performance results. The authors present their own classification of multi-controllers and claim that, despite undeniable advantages, all the reviewed controllers have serious drawbacks that must be eliminated. These drawbacks hamper the development of multi-controllers and their widespread adoption in corporate networks. The authors conclude that at present no solution is capable of adequately and fully solving all the tasks assigned to it. The article is published in the authors’ wording.
In the modern world, the efficient use of energy is an extremely important aspect of human activity. In particular, heat supply systems have significant economic, environmental and social importance for both heat consumers and heat supply organizations. The economic status of all participants in the heat supply process depends on the efficiency of the heat supply systems, and vital processes, such as the operation of hospitals and industrial enterprises, depend on their reliability. Given such tight interconnection, the reliable and efficient operation of power supply systems is critical. This article considers ways to improve the efficiency of heat supply systems. A mathematical model is presented for improved planning of heat supply systems by connecting an optimal set of new heat consumers. For each individual consumer with alternative options for connection to the existing heat network, a single optimal option can be chosen; this is made possible by restrictions and a procedure for selecting variants from the subset of binary variables corresponding to the alternatives. A procedure for finding the optimal number of consumers to connect to the existing heat network is presented, which provides the rationale for increasing the number of the network's consumers. The model was tested, and its results on an example of test heat networks are presented. Directions for further study of increasing the efficiency of heat supply systems and for integrating the presented model with modern software packages are determined.
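As a purely illustrative sketch of such restrictions (the notation below is ours, not the paper's), let the binary variable \(x_{ij}\) equal 1 when consumer \(i\) is connected via alternative \(j\); choosing at most one alternative per consumer while respecting the spare capacity of the network can then be written as:

```latex
% Illustrative formulation; C, A_i, p_{ij}, q_{ij}, Q are assumed symbols:
% C - candidate consumers, A_i - connection alternatives of consumer i,
% p_{ij} - benefit of connecting i via j, q_{ij} - added heat load,
% Q - spare capacity of the existing network.
\begin{align*}
  \max\;       & \sum_{i \in C} \sum_{j \in A_i} p_{ij} x_{ij}
               && \text{total benefit of new connections}\\
  \text{s.t.}\;& \sum_{j \in A_i} x_{ij} \le 1, \quad i \in C,
               && \text{at most one alternative per consumer}\\
               & \sum_{i \in C} \sum_{j \in A_i} q_{ij} x_{ij} \le Q,
               && \text{capacity of the existing network}\\
               & x_{ij} \in \{0, 1\}.
\end{align*}
```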
Robert McEliece developed an asymmetric encryption algorithm based on binary Goppa codes in 1978, and no effective key attacks on it have been described yet. Variants of this cryptosystem based on other types of codes are known, but most of them have been proven less secure. Code-based cryptosystems are considered an alternative to number-theoretic ones in view of the development of quantum computing, so new classes of error-correcting codes are required for building new resistant code-based cryptosystems. Non-commutative codes, which are simply ideals of finite non-commutative group algebras, are one option. The Artin–Wedderburn theorem implies that a group algebra is isomorphic to a finite direct sum of matrix algebras when the order of the group and the characteristic of the field are relatively prime. This theorem is important for studying the structure of a non-commutative code, but it gives no information about the summands and the isomorphism. In the case of a dihedral group, these summands and the isomorphism were found by F. E. Brochero Martinez. The purpose of this paper is to study codes in dihedral group algebras when the order of the group and the characteristic of the field are relatively prime. Using the result of F. E. Brochero Martinez, we consider the structure of all dihedral codes in this case, as well as the codes induced by cyclic subgroup codes.
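For concreteness, here is the decomposition in question, with a small standard example that is not taken from the paper:

```latex
% Artin--Wedderburn: if \gcd(|G|, \operatorname{char} F) = 1, then
\[
  FG \;\cong\; \bigoplus_{i=1}^{r} M_{n_i}(D_i)
\]
% for some division algebras D_i. A standard worked example: for the
% dihedral group D_3 \cong S_3 (order 6) and any field F whose
% characteristic is coprime to 6,
\[
  F D_3 \;\cong\; F \oplus F \oplus M_2(F),
\]
% matching the irreducible degrees 1, 1, 2 (1^2 + 1^2 + 2^2 = 6 = |D_3|).
% A dihedral code, i.e. a (left) ideal of F D_3, then splits into a direct
% sum of (left) ideals of the summands.
```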
We present the methodology, as well as the results of measurements and evaluation, of the overhead created by concurrency and virtual memory. A special measurement technique and testbed were used to obtain the most accurate data from the experiments. The technique focuses on measuring the overall performance degradation introduced by concurrency in the form of lightweight user-level threads on IA-32 processors. We obtained and compared results of the experiments in environments with and without virtual memory enabled, in order to understand what loss of performance is caused by virtual memory itself and how it affects the overhead associated with concurrency. The results showed that the overhead of concurrency outweighs the overhead of virtual memory, and that there is a complex dependency between them. The article is published in the author’s wording.
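An illustrative microbenchmark in the spirit of these measurements: it times the same amount of arithmetic work run sequentially and split into many cooperatively yielding chunks. It is not the paper's testbed; user-level thread libraries, IA-32 details, and control over virtual memory are outside what a sketch like this can show.

```csharp
// Compares a sequential loop with the same work interleaved with many
// cooperative switches, exposing the cost of the switches themselves.
using System;
using System.Diagnostics;
using System.Threading.Tasks;

class ConcurrencyOverhead
{
    const long Work = 50_000_000;

    static long Sequential()
    {
        long s = 0;
        for (long i = 0; i < Work; i++) s += i;
        return s;
    }

    static async Task<long> Chunked(int chunks)
    {
        long s = 0, per = Work / chunks;
        for (int c = 0; c < chunks; c++)
        {
            for (long i = 0; i < per; i++) s += i;
            await Task.Yield();           // cooperative switch between chunks
        }
        return s;
    }

    static void Main()
    {
        var sw = Stopwatch.StartNew();
        Sequential();
        Console.WriteLine($"sequential:        {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        Chunked(10_000).Wait();
        Console.WriteLine($"with 10000 yields: {sw.ElapsedMilliseconds} ms");
    }
}
```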
In this paper, we consider the classification and applications of switching methods, with their advantages and disadvantages. A model of a computing grid was constructed in the form of a colored Petri net with nodes that implement cut-through packet switching. The model consists of packet-switching nodes, traffic generators, and guns that form malicious traffic disguised as ordinary user traffic. The characteristics of the grid model were investigated under working loads of different intensities. The influence of malicious traffic of the "traffic duel" kind on the quality-of-service parameters of the grid was estimated. A comparative analysis of the stability of computing grids whose nodes implement the store-and-forward and cut-through switching technologies was carried out. It is shown that grid performance is approximately the same under working load, while under peak load the grid whose nodes implement the store-and-forward technology is more stable. The grid with nodes implementing the SAF technology comes to a complete deadlock under an additional load of less than 10 percent. A detailed study shows that the traffic-duel configuration does not affect the grid with cut-through nodes if the workload increases to the peak load, at which the grid comes to a complete deadlock. The firing intensity of the guns that generate malicious traffic is determined by a random function with the Poisson distribution. The CPN Tools modeling system is used for constructing the models and measuring the parameters. Grid performance and average packet delivery time are estimated under various load options.
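A small sketch of the Poisson-distributed firing of the traffic guns. The actual model is a colored Petri net built in CPN Tools; this only illustrates drawing a per-tick number of malicious packets with a given mean (Knuth's algorithm), and the parameter values are hypothetical.

```csharp
// Draws Poisson-distributed packet counts, as a gun's firing intensity might
// be sampled per model tick. Knuth's algorithm; mean `lambda` is assumed.
using System;

class TrafficGun
{
    static readonly Random Rng = new Random();

    // Number of packets emitted in one model tick, distributed Poisson(lambda).
    static int PoissonSample(double lambda)
    {
        double limit = Math.Exp(-lambda), p = 1.0;
        int k = 0;
        do { k++; p *= Rng.NextDouble(); } while (p > limit);
        return k - 1;
    }

    static void Main()
    {
        for (int tick = 0; tick < 5; tick++)
            Console.WriteLine($"tick {tick}: {PoissonSample(3.0)} malicious packets");
    }
}
```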