Summary. Extended English abstracts



1. Alieksieieva K. A., Berko A. Yu., Vysotska V. A. Information technology of Web-resource management based on fuzzy logic.


Andriy Berko1, Kateryna Alieksieieva2, Victoria Vysotska3
1General ecology and ecoinformation systems department,
2Social communications and Information Activity Department,
1Information Systems and Networks Department, 2Software Department,
Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE,

Today a great part of information systems of various orientations is created using modern Internet technologies. The basis of such systems is an agreed and combined data set, which serves as a unified functional web-resource of the information system. Usually, this set is composed of data that is varied in content, format, and method of filing and processing. By the way it is formed, such a resource can be unitary, consolidated, integrated, or distributed, and strongly or semi-structured. One of the important tasks of the web-resource design process is to provide a coherent representation, storage and interpretation of data at all stages of its processing. One of the recognized methods of achieving such unity of data is its integration.
The essential problems solved during the commercial project lifecycle are the planning and preparation of the project. The project planning and preparation process involves identifying a number of characteristics that define the technological, content, commercial and other features of the project. A peculiarity of the control parameters of a commercial web project is the difficulty of determining their exact values. In this case, the use of methods and means of control based on the principles of situational control and fuzzy logic is appropriate. The experience gained to date in this area allows applying the principles of fuzzy logic to project management problems.
A commercial web project is the creation of a specific Internet resource by a developer on the demand of the customer for further receiving income or supporting the customer's main business. One of the essential features of commercial web-projects is their focus on the use of the result by a wide range of consumers. Therefore, the commercial component of the success of the project depends on many external and internal factors. The performer, the customer and the target audience of consumers determine the values of the parameters which characterize the factors that influence the project. At the same time, such values cannot always be set or determined with sufficient accuracy and reliability. In this case, there is a need for making project decisions, planning and implementing project activities taking into account the absence, incompleteness or inaccuracy of some data.
In this paper, fuzzy logic is selected as a tool that provides a solution to the problem of commercial web project management, taking into account all peculiarities of the project. It allows replacing the values of necessary parameters that are difficult or impossible to determine during management processes with their fuzzy linguistic counterparts. The main objective of this work is to determine the procedure and methods of formation and application of fuzzy data in technological tools of commercial web project management.
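The substitution of hard-to-measure parameter values by fuzzy linguistic counterparts can be sketched as follows. This is a minimal illustration only: the abstract does not specify membership functions, so the triangular functions, the parameter "expected audience size" and all numeric ranges below are assumptions.

```javascript
// Triangular membership function over [a, c] with peak at b;
// returns the degree of membership of x in [0, 1].
const triangular = (a, b, c) => (x) =>
  x <= a || x >= c ? 0 : x < b ? (x - a) / (b - a) : (c - x) / (c - b);

// Hypothetical linguistic terms for a project parameter
// "expected audience size" (ranges are invented for the sketch).
const audienceTerms = {
  small:  triangular(-1, 1000, 10000),
  medium: triangular(5000, 20000, 50000),
  large:  triangular(30000, 100000, 200001),
};

// Fuzzification: degree of membership of an observed value in each term.
function fuzzify(terms, x) {
  const degrees = {};
  for (const [name, mu] of Object.entries(terms)) degrees[name] = mu(x);
  return degrees;
}

// The linguistic value of x is the term with the highest membership.
function linguisticValue(terms, x) {
  const degrees = fuzzify(terms, x);
  return Object.keys(degrees).reduce((best, k) =>
    degrees[k] > degrees[best] ? k : best);
}
```

An exact audience estimate of 20000 would then be replaced by the linguistic value `medium`, on which fuzzy inference rules can operate directly even when the underlying estimate is unreliable.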
Principal provisions of the methods of information web-resource design, based on the division of the data integration process into syntactic, structural and semantic integration phases, have been developed in this work. This way of designing information resources is a further development of the classical approach to integration. It allows creating the data structure, the methods of filing and processing, and the final interpretation of values independently of each other. This ensures the highest level of compliance, integrity and relevance of the final information web-resource.
Data integration on the syntax level involves the development of a single system of data value presentation in the process of resource design, within the resource and at the user interface level, as well as for the exchange with other systems.
The integrated structure of information web-resource design allows designing a unified heterogeneous data scheme that combines descriptions of relational, poorly structured, active, streaming, and other types of data.
The integration of semantics is the final stage of web-resource information system design, aimed at developing agreed rules for the interpretation, perception and use of the data that is combined in this resource.
Using the techniques developed in this paper provides additional opportunities to improve the quality of information web-resources, as well as to develop and implement effective CASE-tools for their design.
Key words: project, project management, data uncertainty, project decision making, web resources, commercial content, content analysis, Internet marketing, fuzzy data, fuzzy logic.
2. Arzubov M. V., Shakhovska N. B. An extension for searching and removing harmful or unnecessary information in an Internet browser.


Maksym Arzubov1, Natalya Shakhovska2
1,2Information Systems and Networks Department,
Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE,

In this article software for searching and removing harmful or unnecessary information is described. Goals, objectives and scope of such an extension are defined.
The goal of this project is to develop an extension for blocking untrusted web sites and intrusive advertising. A web server should also be developed for keeping static data and validating information with the logistic regression method.
A key advantage of the software is that, as a browser extension, it is integrated directly into the web browser and does not require launching a separate program.
The extension is written in the high-level scripting language ECMAScript6. Unfortunately, this version of the language is not yet fully supported in all modern web browsers, which is why technologies such as Babel and WebPack were used. WebPack provides incremental building of the application with support for various plug-ins. Babel transpiles ECMAScript6 into ECMAScript5, which is fully supported in all major browsers.
The work of the extension begins with downloading updated samples from the server. The samples are then stored in the browser repository and can be used locally. The extension uses regular expressions for information filtering. Each user can choose what information is blocked and what is not; therefore, the final regular expression may vary for different users. The content script grabs the data for filtration from the page and passes it to the background script, where the information is filtered. The results are then passed back, and the relevant information is blocked.
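The flow just described – a content script handing page data to a background script, which applies a per-user regular expression – might be sketched roughly as follows. The sample word lists, category names and data shapes are invented for illustration; they are not the authors' actual code.

```javascript
// Hypothetical blocking samples, as they might arrive from the server.
const samples = {
  ads:      ['banner', 'sponsored', 'promo'],
  tracking: ['analytics', 'pixel'],
};

// Build one regular expression from the categories a user enabled,
// so the final expression differs from user to user.
function buildFilter(enabledCategories) {
  const words = enabledCategories.flatMap((c) => samples[c] || []);
  if (words.length === 0) return null; // nothing to block
  return new RegExp(words.join('|'), 'i');
}

// Background-script side: decide for each text fragment grabbed
// by the content script whether it should be blocked.
function filterFragments(fragments, filter) {
  return fragments.map((text) => ({
    text,
    blocked: filter !== null && filter.test(text),
  }));
}
```

In a real extension the content script would send the fragments with `chrome.runtime.sendMessage` and hide the DOM nodes reported back as blocked.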
The web server keeps static data and is able to validate information with the logistic regression method. To develop the server, technologies such as Node.js and Express.js were used. Data is stored in a Redis database, which provides the ability to store data in the form of key / value pairs in memory for fast access. With the asynchronous Node.js model for querying and storing data in memory, the server is able to cope with heavy loads.
For server-side information validation, the logistic regression method was used. It is based on the same data that is used in the extension's regular expressions, which filter information in the web browser. However, this method even allows filtering data that is not in our database: it checks data for similarity with the data we already have. This makes it unnecessary to store all the information to be blocked.
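Server-side validation by logistic regression, as described above, can be sketched in a few lines. The feature extraction step is deliberately omitted, the training loop is plain batch gradient descent, and the data shape ({ features, label }) is an assumption made for the sketch, not the authors' implementation.

```javascript
// Sigmoid squashes a linear score into a probability in (0, 1).
const sigmoid = (z) => 1 / (1 + Math.exp(-z));

// Probability that an item (e.g. a URL's feature vector) is harmful.
function predict(weights, bias, features) {
  const z = features.reduce((sum, x, i) => sum + weights[i] * x, bias);
  return sigmoid(z);
}

// One batch gradient-descent step on the log-loss.
function trainStep(weights, bias, data, rate = 0.1) {
  const gradW = weights.map(() => 0);
  let gradB = 0;
  for (const { features, label } of data) {
    const err = predict(weights, bias, features) - label; // p - y
    features.forEach((x, i) => { gradW[i] += err * x; });
    gradB += err;
  }
  return {
    weights: weights.map((w, i) => w - (rate * gradW[i]) / data.length),
    bias: bias - (rate * gradB) / data.length,
  };
}
```

New, unseen items then get a similarity-like score: the closer their features are to known harmful samples, the higher the predicted probability, so only the model parameters need to be stored rather than every blocked item.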
The result of this work is a browser extension that improves the user's web browsing experience, and a web server that stores lists of sites in the Redis database with the possibility of validating a link to a web site.
Obtained numerical results are presented in graphs and analyzed in terms of accuracy and speed.
Key words: Chrome Extension, Node.js, express, JavaScript I/O, ECMAScript6, Babel, WebPack, Redis, logistic regression, gradient descent method.

3. Burov Ye. V., Pasichnyk V. V. Software systems based on ontological task models.


Yevhen Burov1, Volodymyr Pasichnyk2
Information Systems and Networks Department,
Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE,

The increased mobility of business processes today relies on the extensive use of software and, in turn, puts high demands on the quality of software and its ability to be quickly and accurately adapted to changes in the business environment. This is especially true for software whose requirements change over time and whose structure and functions must constantly meet the conditions of its operating environment (class E in Lehman's classification). It is well known that the evolution of class E software results in increased complexity, multiple errors, and deterioration of quality and functional integrity.
A promising approach to solving the problem of adapting software to changes in its operational environment is the use of ontological modeling. Unlike classical modeling approaches based on model compilation or rules processing, ontological modeling builds a formal domain model (an ontology) that can be reused to create other software for the same domain.
When building software using an ontological approach, it is advisable to choose the task as the unit of modeling, the task being the smallest identifiable and necessary part of any process. The concept of a task is defined in the literature as an identified problem situation with defined conditions (data) and a purpose.
In the article, the theoretical principles of knowledge representation and processing in software based on ontological task models were developed. A formal model of knowledge representation was built. A mechanism of model interaction using factual context is proposed. A methodology of complex ontology management using ontological task models is defined.
Given the mutually complementary nature of the declarative and procedural approaches, the article proposes their combination into a single approach to knowledge representation. The general ontology defines a set of entities, relationships and interpretations for the entire domain. The ontological task model uses the components of the ontology and determines the concepts, relations, limitations and actions in the context of a particular task. For the formal model construction, the approach of Coo's algebra of systems was applied. It defines the algebraic system as a combination of several algebraic domains.
The article shows how the usage of ontological task models simplifies the management of a complex ontology. A methodology for ontology creation based on ontological analysis and modeling of the system's tasks was developed. The main advantage of this methodology over others is the simplification of the ontology creation process, achieved by specifying criteria for the inclusion of entities and relations in the ontology, an iterative building process, and the definition of objective criteria for ontology validation.
The software for ontological task modeling consists of the knowledge base, which contains a database of facts, the ontology and the repository of models. The database keeps facts about objects and events in the outside world according to the modeled software functions. All of them are represented as instances of certain classes of the ontology. The ontology thus contains a domain model represented as a taxonomy of classes. Ontological models encapsulate the knowledge about the ways to perform tasks and solve problems. They are initialized with facts from the database, creating fact-models.
Methods of using ontological models for performing tasks in the domains of software development, access control, decision support and tourism were developed. In particular, the feasibility of the proposed ontology creation methodology was demonstrated using an example from the software testing domain.
In the article, ontological models are used for automating the tasks of controlling access to resources in information systems. It is shown that, in comparison with the known methods DAC, MAC, RBAC and ABAC, the method that uses an ontological model provides dynamic granting and withdrawal of access rights in the context of the business processes running in the system, prevents the accumulation of user permissions over time, simplifies the rights management process, and documents and justifies all operations.
The method of using ontological task models for automated testing of nightly software builds was developed. This method also demonstrates the organization of models interaction in the multistage business process of automated testing.
The article describes the architecture and functionality of the developed prototype of a software systems modeling environment based on ontological models. Python (version 2.7) was chosen as the software platform for prototyping, together with the PyQt graphics library, a port of the open and free Qt library to the Python platform.
Factors that influence the effectiveness of using ontological task models for building software products were elucidated and analyzed. Formulas for estimating the increase in quality characteristics at different stages of the software life cycle were proposed.
Key words: ontology, ontological model, software system, adaptation

4. Veres O. M. Aspects of uncertainty manifestation in the processes of decision support systems development.


Oleh Veres
Information Systems and Networks Department,
Lviv Polytechnic National University, 12 S. Bandera Str., Lviv, 79013, UKRAINE,

Stabilization of the economy leads to increased competition and to the increased importance of right decisions for successful business. One of the key success factors in business management, as well as in everyday life, is the speed and quality of decisions.
Decision Support Systems (DSS) are the information systems most suitable for the challenges of everyday administrative activity and a tool that enables managers to make informed and effective management decisions. A DSS enables the real-time, automatic analysis of large amounts of information. DSS are used to solve unstructured and semi-structured multicriterion problems. A DSS is an interactive automated system that helps the decision maker use data and models to identify and solve problems and make decisions. In enterprises, these systems run interactive queries, perform modeling and form situation reports on-line. The aim of a DSS is improving decisions.
This article describes the classification of and approaches to the construction of DSS that take into account various aspects of uncertainty in DSS development. Classification features are proposed and described, and a generalized classification of DSS is improved. The types of architecture are analyzed, and the architecture of the information resource of decision support systems, based on the principles of building a data warehouse, is reviewed. An analytical description of conceptual approaches to model building is given, and an approach to the design of the main components of decision support is considered. Building a DSS based on a data warehouse (DW) requires new technological solutions. Based on the generalized conceptual model of DSS, a complete corporate structure of the DSS is built, which corresponds to the three-tier architecture based on the information storage. OLAP technology is closely associated with data warehousing and with the technology of building predictive processing – Data Mining. Therefore, the best option is a comprehensive approach to their implementation.
To address the universality of DSS, the use of an object-oriented approach for quick and easy DSS design, based on the use of multiple structures of different types, is analyzed. A three-tier architecture and a conceptual model of DSS are proposed, reflecting the development of reusable components of different nature, namely: the structure of the decision-making situation, the structure of decision-making models, and the construction of patterns and image patterns for user interface development. The types of reusable components and the sequence of custom DSS assembly are described. This approach can be seen as designing higher-level structures that consist of lower-level ones, together with recommendations for using these structures to develop a specific DSS.
One of the advantages of using ontologies as an instrument of knowledge representation is a systematic approach to the study of the subject area. An ontology of data cleaning techniques is built for the methodological systematization of functional elements in the implementation model of the DSS.
To resolve the problem of uncertainty in the application of Big Data information technology, its formal model is built and the main structural elements are described. The features of applying Big Data as an information technology for building DSS are considered. The components of a generalized formal model of Big Data are proposed and described.
Key words: data, classification, object-oriented approach, architecture, structure, ontology, paradigm, Data Warehouse, decision making, Decision Support System.

5. Vysotska V. A. Analytical methods of information resources processing in electronic content commerce systems.


Victoria Vysotska
Information Systems and Networks Department,
Lviv Polytechnic National University, 12 S. Bandera Str., Lviv, 79013, UKRAINE,

The rapid development of the Internet contributes to the increase in demand for efficient data of a production / strategic nature and the implementation of new forms of information services through modern information technologies (IT) of e-commerce. Commercial content is documented information prepared in accordance with users' needs. Today e-commerce is a reality and a promising business process. The Internet is the business environment, and commercial content is the commodity with the highest demand and selling rate; it is also the main object of electronic content commerce processes. Commercial content can be immediately ordered, paid for and received on-line as a commodity. The entire spectrum of commercial content is sold via the Internet – scientific and publicistic articles, music, books, movies, pictures, software, etc. Well-known corporations that implement electronic content commerce are Google (through Google Play Market), Apple (Apple Store) and Amazon. Most of the decisions and research are conducted at the level of specific projects. Electronic content commerce systems (ECCS) are built on the closed principle as non-recurrent projects. Modern ECCS are focused on the realization of commercial content that is produced outside the system. The design, development, implementation and maintenance of ECCS are impossible without the use of modern methods and information technologies of commercial content formation, management and maintenance.
The development of the technology of information resources processing is important in view of such factors as the lack of theoretical grounding of methods for studying commercial content flows and the need to unify software methods of information resources processing in ECCS. The practical factor of information resources processing in ECCS is related to solving the problems of formation, management and support of growing volumes of commercial content on the Internet, the rapid development of e-business, the widespread availability of the Internet, the expansion of the set of information products and services, and the increase in demand for commercial content. The principles and IT of electronic content commerce are used when creating on-line stores (selling eBooks, software, video, music, movies, pictures), on-line systems (newspapers, magazines, distance education, publishing), off-line selling of content (copywriting services, Marketing Services Shop, RSS Subscription Extension), cloud storage and cloud computing. The world's leading producers of means of information resources processing, such as Apple, Google, Intel, Microsoft and Amazon, are working in this area.
The theoretical factor of information resources processing in ECCS is connected with the development of IT for commercial content processing. In the scientific studies of D. Lande, V. Furashev, S. Braychevskyi and A. Grigoriev, mathematical models of electronic information flow processing are investigated and developed. G. Zipf proposed an empirical law of the distribution of word frequencies in natural-language text content for its analysis. In the works of B. Boiko, S. McKeever and A. Rockley, models of the content life cycle are developed. The methodology of content analysis for processing textual data sets was initiated and developed by M. Weber, J. Kaiser, B. Glaser, A. Strauss, H. Lasswell,
O. Holsti, Ivanov, M. Soroka and A. Fedorchuk. In the works of V. Korneev, A. F. Gareev, S. V. Vasyutina and V. V. Reich, methods of intellectual processing of text information were proposed. EMC, IBM, Microsoft, Alfresco, Open Text, Oracle and SAP have developed the Content Management Interoperability Services specification based on a Web-services interface to ensure the interoperability of electronic content commerce management systems. From the scientific point of view, this segment of IT has not been investigated enough. Each individual project is implemented almost from the very beginning, based, in fact, on personal ideas and solutions. Very few significant theoretical studies, research findings or recommendations for the design of ECCS and the processing of information resources in such systems are highlighted in the literature. It has become urgent to analyze, generalize and justify the existing approaches to the implementation of e-commerce and the building of ECCS. An actual problem is the creation of a complex of technological products based on the theoretical study of methods, models and principles of information resources processing in ECCS, grounded on the principle of open systems, which allow managing the process of increasing the sales of commercial content. The analysis of these factors enables us to infer the existence of an inconsistency between the active development and extension of IT and ECCS on the one hand, and the relatively small amount and locality of research on this subject on the other. This contradiction raises the problem of the containment of innovation development in the segment of electronic content commerce until the creation and introduction of appropriate new advanced IT, which negatively affects the growth of this market.
Within this problem there is an urgent task of developing scientifically based methods of processing the information resources of electronic content commerce and of building, on their basis, software for the creation, dissemination and sustainability of ECCS. In this paper, a study was carried out to identify patterns, characteristics and dependencies in the processing of information resources in ECCS.
The article discusses the development of unified methods and software tools for processing information resources in electronic content commerce systems. The main problems of electronic content commerce are analyzed, and the functional services of commercial content management are explored. The proposed method gives an opportunity to create an instrument of information resources processing in electronic commerce systems and enables the implementation of the commercial content management subsystem. A new detailed classification of electronic content commerce systems is proposed. A model of electronic content commerce systems and models of information resource processing in such systems are proposed, and the architecture of these systems is built. A new approach to the application and implementation of business processes is formulated for the construction of electronic content commerce systems. A complex method of content creation, an operational method of content management and a complex method of content support are developed, along with software tools for content creation, management and support. Design and implementation methods of electronic content commerce systems, exemplified by online newspapers and reflecting the results of the theoretical research, are developed. From the perspective of a systemic approach, the principles of applying information resources processing in electronic content commerce systems for the content lifecycle implementation made possible the development of methods for commercial content formation, management and support. An integrated method of commercial content formation that reduces the time and resources of content production is developed. This makes it possible to create means of information resources processing and to implement a subsystem of automatically generated content.
A method of commercial content management that reduces the time and resources of content sales was created, which makes it possible to implement a commercial content management subsystem. A method of commercial content support that reduces the time and resources of target audience analysis in electronic content commerce systems is implemented, which makes it possible to develop a commercial content support subsystem.
Key words: Web resources, content, commercial content, information resource, business-process, content management system, content lifecycle, Internet newspaper, content analysis, content monitoring, content search, electronic content commerce system.
6. Hasko R. V., Vysotska V. A., Chyrun L. B. Information system for the analysis of the psychological state of a person.


Ruslan Hasko1, Victoriya Vysotska2, Liliya Chyrun3
Information Systems and Networks Department,
Lviv Polytechnic National University, 12 S. Bandera Str., Lviv, 79013, UKRAINE,

For the modern professional, psychological culture, which consists of three main components (self-knowledge, knowledge of another person, and the culture of behavior and communication), is no less important than knowledge of a personal computer or a foreign language. Success in business largely depends on a person's psychological culture. This issue is even more important in the era of social networking.
Social networks are becoming increasingly popular. A huge number of people have accounts in several of them. By actively using social networks, a person discloses a lot of different information about himself or herself. In today's world, accounts in social networks are increasingly becoming a source of information about each specific person. There are very few information resources and programs that help identify a psychological or emotional state using social networks. Basically it comes down to applications or programs that offer testing to determine your character, "compatibility" with friends and the like. But they all have one major flaw – they are not automated.
The idea of assessing a personality by its activity in social networks increasingly excites researchers. Researchers have set the goal of testing whether, based on the analysis of a Facebook profile, it is possible to determine the severity of the Big Five traits of its owner, to predict the likelihood of hiring that person, as well as future job performance. According to the authors, many personal characteristics are manifested in a Facebook profile. The research results suggest that for many problems of assessing specific characteristics of a person, one can turn to the analysis of the person's profile and activity in social networks. Moreover, this can be done by a computer, not a human.
The problem of psychological analysis is very relevant in today's world, especially in the period of development of information technologies and social networks. However, at the moment the software market offers no programs or information systems that deal with this issue comprehensively and fully. The goal of research in this area is to assess the psychological state of contemporary society. It is possible to create serious information systems that, based on information from social networks, will be able to identify and predict the so-called "temperature" – the general state of society at a given time.
The purpose is to create an information system that would analyze personal information (messages in social networks, tweets, etc.) and, based on this analysis, create a psychological portrait of a person and draw some conclusions and recommendations.
The choice of the software design methodology is key to developing the information system for analyzing the psychological state of a person. By choosing the right methodology, the best options for developing the information system can be worked out. This information system is focused on the use of the Internet, so when choosing tools and means of implementation, those technologies that will implement the software were considered. JavaScript, which in recent years has been popular among web developers, was selected as the programming language. For the page layout, the following technologies were selected: HTML5, CSS3 and JavaScript. For the server side, the PHP programming language and the MySQL database are used. The software is implemented with three components: HTML, needed to create the site; CSS, the language that sets the website's look; and JavaScript, the programming language needed to implement what cannot be achieved with CSS and to provide website interactivity.
In the course of the work, an information system was developed through which one can conduct a psychological analysis of a person using his or her messages from social networks. The system helps automate the process of collecting information and obtaining results.
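The abstract does not disclose the analysis algorithm itself; purely as an illustration of the message-scoring idea, a naive keyword-based pass over collected messages could look like this (the word list, scores and thresholds are invented for the sketch):

```javascript
// Hypothetical emotion lexicon: word -> score contribution.
const lexicon = {
  happy: 1, great: 1, love: 2,
  sad: -1, angry: -2, hate: -2,
};

// Score one message by summing lexicon hits over its words.
function scoreMessage(text) {
  return text
    .toLowerCase()
    .split(/\W+/)
    .reduce((sum, word) => sum + (lexicon[word] || 0), 0);
}

// Aggregate a user's messages into a coarse verdict for the portrait.
function portrait(messages) {
  const total = messages.reduce((sum, m) => sum + scoreMessage(m), 0);
  const avg = total / Math.max(messages.length, 1);
  return avg > 0.5 ? 'positive' : avg < -0.5 ? 'negative' : 'neutral';
}
```

A production system would, of course, need a far richer model (negation handling, a validated lexicon, per-trait scales), but the collect-then-score pipeline stays the same.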
Key words: information resources, commercial content, content analysis, content monitoring, content search.

7. Demchuk A. B. Coordination of the typhlocommenting process.


Andriy Demchuk
Information Systems and Networks Department,
Lviv Polytechnic National University, 12 S. Bandera Str., Lviv, 79013, UKRAINE,

The development of mathematical support for the typhlocommenting process is described. For this purpose, the theory of coordination is used. Related problems are solved, such as the moments of intervention (before a decision, based on the prediction of system behavior, and after the decision, which creates in the lower subsystems an effect that leads to improved behavior of the system as a whole) and the mutual dependence of levels (advisory and managerial methods of influence on lower-level subsystems, and the possibility of their simultaneous use in some cases); a coordinated multilevel system is defined by properties such as the global and local target functions of the information system. The structure of the coordination problem is to obtain a global solution to the problem as the result of coordinating the solutions of subtasks among themselves, and this is the main goal criterion of a coordinated multilevel system.
When researching the problem of access of visually impaired people to video content, it must be understood that the greater part of the information is provided to the viewer in the form of an image. Blind people hear all the words of the actors, the sounds of the environment and the processes on the screen, but it is difficult for them to identify the person to whom specific words belong, what happens to the heroes in a specific moment of the scene, and to understand the reactions of the actors, which are usually expressed with the help of movements or mimics.
A significant number of systems that have practical value belong to the class of systems called large or complex. Managing such systems is too complicated a task for a single governing body, which has a limited capacity for processing information. Therefore, it can be solved in parallel: the overall control task is divided into several subtasks solved by the respective governing bodies. This partition of the management task into subtasks is called decomposition. An important point of the decomposition approach is the possibility of parallel computing, when several local problems are solved simultaneously.
Decomposition of the management task leads to the problem of coordination. The problem is to create a mechanism that ensures the consistency of subsystems that operate autonomously. Coordination is understood as the implementation of global constraints and the formation of subsystem goals that agree with the global objectives of the system. To make such coordination more effective, it is better to have a special coordinating body than to carry out a direct exchange of information between all governing bodies, which increases the load on each governing body. This coordinating body has priority over local governing bodies, which leads to a hierarchical structure for the control of complex objects.
Multi system consists of two levels of hierarchy. At the lowest level are subsystems that solve individual subtasks. Each of these subsystems selects typhlocomment video for his part, that in turn consists of a hierarchy of layers. The second level contains the coordination subsystem.
Coordinated multi-information system is determined by the special properties of global and local target function information system. Depending on what principle underlies the coordination of multi-information system (solving interaction or interaction forecasting), the target functions include different requirements. Multilevel coordinated information system based on the principle of solving interactions requires absolute consistency interlevel objective functions subtasks.
The coordination process of typhlocomment – the system chooses to place in the video content available for typhlocomments (comments relevant to people with visual impairments, which give them an understanding of video scene that at any given time broadcast on the screen), later, coordinator “means” these places for insert typhlocomments.
Key words: typhlocomment, audiodescription, coordination, videocontent, IT, videocontent for sightless.

8. Євланов М. В., Васильцова Н. В., Панфьорова І. Ю. Моделі і методи синтезу опису раціональної архітектури інформаційної системи.


Maksym Ievlanov, Nataliia Vasyltsova, Iryna Panforova
Kharkiv National university of Radio Electronics, UKRAINE,

The purpose of the article is the development of models and methods for synthesizing a description of a rational architecture of information systems based on formal models of requirements to the information system.
The article solves the problem of constructing mathematical models and methods capable of solving the problem of synthesizing a rational description of the functional structure of an information system.
The object of study is the methodologies, architectural frameworks, and information technologies for information system design aimed at analyzing requirements to information systems and synthesizing the functional structures of these systems on the basis of this analysis.
The subject of study is the mathematical models and methods for solving the problem of synthesizing a description of a rational information system architecture.
The following results were obtained in this article:
– the description of the functional requirements for an information system at the knowledge level is represented as a fragment of a semantic network consisting of frames, interfaces and the connections between them;
– a model describing the information system architecture as a semantic network consisting of a set of representations of the functional requirements at the knowledge level was developed for the first time;
– a method for synthesizing variants of information system architecture descriptions based on a modified CLOPE algorithm was developed for the first time; it produces a description of the information system architecture from which functional requirements that duplicate one another have been removed;
– a game-theoretic model of the synthesis of a rational information system architecture description was further developed, for which the payoff functions of the Provider and the User of IT services, their payoff matrices, and a method of searching for Nash equilibrium situations in the pure strategies of the Provider and the User of IT services were defined;
– the system of design constraints to be considered when searching for a rational description of the information system architecture was modified by introducing functions that evaluate the amount of work needed to create an information system according to the description of its rational architecture;
– the term “ontological point” was introduced, together with a model of its description, as the basic unit for evaluating the amount of work needed to create an information system according to the description of its rational architecture;
– a method for synthesizing descriptions of ontological points as artifacts was developed for the first time to automate the evaluation of the cost of an IT project for an information system;
– an evaluation function was introduced that defines the scope of work needed to create an information system on the basis of the models describing ontological points.
The practical value of this article is that it defines the formal basis of an information technology for accelerated development of information systems, which can significantly reduce the time needed to develop or modify information systems by processing the functional requirements for the system and reusing the knowledge gained from these requirements.
All scientific and practical results were obtained by the authors in person.
Using game-theoretic models to solve the task of synthesizing an information system architecture description can provide significant benefits over existing approaches due to the use of mixed strategies, by which the Provider offers the User of IT services not typical functional modules of the information system being created, but individual IT services that make up the functional modules of the system, fully adapted to the requirements of the User. Thus, at the formal level, it is possible to form the description of the architecture of the information system being created as a set of artifacts that can then be used as a specification for the automated synthesis of the database and software of that system.
Key words: information system, architecture, requirement, IT service, semantic network.

9. Камінський Р. М., Бігун Г. Побудова рекурентних діаграм коротких часових рядів засобами MS Excel.


Roman Kaminsky1, Halyna Bigun2
Information Systems and Networks Department,
Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE,

In recent years, statistical methods, tools and the analysis of experimental data have led to the use of nonlinear analysis of time series. The development of the theory of nonlinear dynamics has strengthened the understanding of the mostly nonlinear nature of phenomena.
Complex dynamical systems are mostly characterized by irregular dynamics of behavior, manifested as random and deterministically chaotic processes. The observation and experimental study of such systems is represented by time series – sequences of discrete random variables, the values of the relevant parameters ordered by the time of their receipt, that characterize the state of the observed object at certain moments of time.
Most nonlinear dynamics methods require long and stationary time series; the recurrence plot method is an exception. Using statistical methods for time series to some extent ensures a correctly designed model of the phenomenon, process or object under study, but representing its dynamics is rather problematic. Statistical methods successfully solve the problem of identifying the form and closeness of the connection between a factor and a resulting indicator, and give quantitative and qualitative characteristics of objects.
When studying dynamical systems by recurrence analysis, large amounts of primary data are not needed; a time series from a single measurement experiment is enough. Traditional analytical methods impose restrictions that are difficult to overcome. Recurrence analysis is a rich field of research both as a method and in its range of applications.
Time series analysis tools play an important and direct role in obtaining qualitative results. There is a need for a new tool and methodology based on the properties of dissipative systems, one that would impose no special requirements on the data and would provide sufficient results. The main purpose of time series analysis is to obtain information about the properties and mechanism of the system that generates the series; such information is the foundation for modeling such systems.
The nature of recurrence plots makes it possible to visualize the functional activity and dynamics of systems on the basis of observation data. They allow one to understand the basic properties and structure reflected in the observation data. Hidden regularities and structural changes in the data can be detected graphically from recurrence plots, or such regularities can be seen through the study of the time series.
The recurrence plot method was introduced for displaying and recognizing trends in the time series data of complex dynamical systems. The analysis of such structures can provide an understanding of the nature of the processes that take place in dynamical systems and always have a mathematical basis.
Recurrence plots are one of the most interesting modern methods, which over the last decade has received wide theoretical development and practical use.
To demonstrate the recurrence plot construction method in MS Excel, three time series of harmonic, meander (square-wave) and triangular type were generated in such a way that the values in each time series repeat as complete cycles. The implementation of this recurrence plot construction method was carried out as the experimental research part of a study of typing staff.
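The recurrence (proximity) matrix underlying such plots can be sketched in a few lines of Python. This is an illustration of the general method, not the authors' MS Excel implementation; the harmonic series, its length and the threshold `eps` are illustrative choices.

```python
# Sketch of a recurrence (proximity) matrix for a short scalar time series.
# R[i][j] = 1 when points i and j of the series are closer than eps.
import math

def recurrence_matrix(series, eps):
    """R[i][j] = 1 if |x_i - x_j| < eps, else 0 (scalar proximity)."""
    n = len(series)
    return [[1 if abs(series[i] - series[j]) < eps else 0
             for j in range(n)]
            for i in range(n)]

# A harmonic series repeating as complete cycles (two periods of 8 samples).
x = [math.sin(2 * math.pi * i / 8) for i in range(16)]
R = recurrence_matrix(x, eps=0.1)

# The diagonal is always recurrent; points one full period apart recur too,
# while a quarter-period offset at the peak does not.
print(R[0][0], R[0][8], R[0][2])  # 1 1 0
```

Plotting `R` as a black-and-white image (e.g. with Excel conditional formatting, as in the article) reveals the diagonal line structures typical of periodic dynamics.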
Key words: nonlinear dynamics, time series, recurrence plots, proximity matrix, MS Excel.

10. Кравець П. О. Ігрова модель самоорганізації мультиагентних систем.


Petro Kravets
Information Systems and Networks Department,
Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE,

The object of research of this work is the processes of self-organization of multiagent systems (MAS) under conditions of uncertainty, directed at achieving coordinated operation of the MAS components owing to the properties of self-learning and adaptation. As a result, the distributed system of elements functions as a fully harmonious organism.
The subject of research is a stochastic game model of MAS self-organization that provides a balance of the values of the payoff functions of a team of players and manifests itself in the achievement of coordinated strategies of the agents.
The purpose of the work is to construct a game model of MAS self-organization to support decision-making under conditions of uncertainty. This purpose is achieved by solving the following problems: developing a mathematical model of a multiagent stochastic game; developing a self-learning method and algorithm for solving the stochastic game; developing software for modeling the stochastic game; and analyzing the obtained results and working out recommendations for their practical application.
From the game-theoretic point of view, MAS self-organization is a process of coordinated choice of the agents' strategies, achieved through self-learning during the collective optimization of payoff functions under uncertainty. Collective decisions are coordinated if they satisfy the conditions of advantage, equity and stability for all participants of decision-making. For solving practical problems, the criteria of collective Nash equilibrium and Pareto optimality are most often used.
Game-based self-organization of MAS under uncertainty is a topical scientific and practical problem that is intensively investigated in the modern literature in the fields of distributed artificial intelligence and decision-making.
The developed game model provides dynamic self-organization of MAS, which manifests itself in a rhythmic change of the agents' pure strategies simulating the light effects of a colony of firefly insects. A prominent feature of the considered game self-organization is the locally conditioned gathering of information about the behavior strategies of neighboring agents, which as a result of training leads to a global coordination of the strategies of all agents.
The generation of sequences of pure strategies with the necessary properties is provided by a random distribution constructed on the dynamic mixed strategies of the players. The mixed strategies are calculated by an adaptive recurrent method obtained on the basis of stochastic approximation of a complementary slackness condition, which describes the collective game decisions satisfying the Nash equilibrium condition.
The convergence of the game method is determined by restricting the values of its parameters according to the results of the theory of stochastic approximation and recurrent estimation. The rate of game self-organization depends on the number of players, the number of strategies, the noise intensity and the parameters of the game method. With an appropriate selection of the parameters of the game method, it is possible to reach a convergence rate of power order close to 1.
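A hedged sketch of the general idea of an adaptive recurrent mixed-strategy update follows. It is not the article's exact method: the reward scheme (a linear reward-type rule), the deterministic losses, the step size and the simplex projection are all illustrative assumptions; the article's method is derived from a complementary slackness condition.

```python
# Illustrative sketch of a recurrent mixed-strategy update driven by game
# losses: a low current loss reinforces the chosen pure strategy, and the
# result is projected back onto the probability simplex.
import random

def project_to_simplex(p):
    """Clip negatives and renormalize so the strategy stays a distribution."""
    q = [max(v, 0.0) for v in p]
    s = sum(q)
    return [v / s for v in q] if s > 0 else [1.0 / len(p)] * len(p)

def update_mixed_strategy(p, chosen, loss, step):
    """One recurrent step: reward = 1 - loss pulls p toward the chosen strategy."""
    reward = 1.0 - loss
    moved = [pi + step * reward * ((1.0 if i == chosen else 0.0) - pi)
             for i, pi in enumerate(p)]
    return project_to_simplex(moved)

# Two pure strategies; strategy 0 incurs a lower loss, so repeated plays
# should concentrate the mixed strategy on it (losses here are deterministic
# for the sketch; the article's losses are random).
rng = random.Random(7)
p = [0.5, 0.5]
for _ in range(500):
    chosen = rng.choices([0, 1], weights=p)[0]
    loss = 0.0 if chosen == 0 else 1.0
    p = update_mixed_strategy(p, chosen, loss, step=0.05)
print(p[0] > p[1])  # True: the strategy concentrates on the low-loss action
```

In the multiagent setting, each agent runs such an update on its own mixed strategy while the losses couple the agents, which is what drives the coordination described above.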
Besides the parameter values, the self-organization of the stochastic game of MAS is determined by the balance of penalties for violating spatial coordination and the time rhythm. It has been experimentally established that such a balance can be provided by the influence of white noise on the formation of the random current losses.
The efficiency of the game self-organization of MAS strategies was studied by means of the average loss functions, the coordination factors, and the norm of the deviation of the dynamic mixed strategies from their optimal values. The decrease of the average loss function and of the mixed-strategy deviation function, together with the growth of the coordination factors, testifies to the convergence of the game method and the system's entry into a self-organization mode. The repetition of the values of the game characteristics in different experiments with unique sequences of random variables confirms the reliability of the obtained results.
Key words: multiagent systems, uncertainty conditions, stochastic game model, self-organization.

11. Левус Є. В., Шалак М. І., Вітоль О. І. Аналіз ефективності аспектно-орієнтованої реалізації для забезпечення супроводу web-системи


Yevheniya Levus, Maksym Shalak, Oksana Vitol
Software Department, Lviv Polytechnic National University,
S. Bandery Str., 12, Lviv, 79013, UKRAINE, E-mail:

Software maintenance is an essential part of the software development life cycle, which starts after the software is transferred to the operation stage. This stage of the life cycle is considered in terms of meeting customer requirements in the finished software product. About 70 % of project time is spent on the maintenance and modification of the finished project, and the cost of the maintenance phase is estimated at 50 % of total life cycle costs.
The current state of software engineering is characterized by the increasing complexity of software systems, along with the increasing complexity of the work on the maintenance phase. The difficulty of maintenance is now so great that it leads to treating the maintenance of system components as separate tasks of software development projects. There is thus a problem of the increasing complexity of software maintenance. It is urgent to find a way to separate the cross-cutting concerns of a system that would reduce the complexity of the software system, its cost, and the complexity of its maintenance.
This article discusses one of the ways to solve the cross-cutting concern problem – aspect-oriented programming (AOP). AOP allows us to leave only the basic functionality in the classes, while the cross-cutting concerns are moved into aspects. Used properly, AOP improves the decomposition of the system and ensures the reuse of code. As the aspect-oriented paradigm is quite new, it is important to analyze the use of AOP in the development of systems in terms of software maintenance and to confirm the feasibility of its application for certain classes of problems. It is also essential to find tools that confirm the effectiveness of this approach in each case. For the computational experiments, a previously created system with a client-server architecture was taken, designed for users with three levels of access to the network. In this system, such cross-cutting concerns as exception handling, event logging, opening and closing database connections, and initialization of private class attributes were identified. The maintainability index was used for the comparison between the object-oriented and aspect-oriented implementations of the client-server system. The results indicate a growth of the maintainability index after applying AOP, namely from 4 % to 10 % for the modified code blocks. A negative result of the maintainability index was obtained for access rights checking. In this specific implementation of the system, AOP is most effective for working with the database and least effective for access rights checking. It is reasonable to introduce the technology when at least three classes can be modified; otherwise, even if the code quality of the classes improves, the maintainability index will deteriorate due to the additional code of the aspect.
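The maintainability index comes in several variants, and the article does not state which one it uses; the Python sketch below implements one common three-metric form (the Oman–Hagemeister formula) as an illustrative assumption, with hypothetical before/after metric values for a class whose cross-cutting code is moved into an aspect.

```python
# One common variant of the maintainability index:
#   MI = 171 - 5.2*ln(Halstead volume) - 0.23*(cyclomatic complexity) - 16.2*ln(LOC)
# The metric values below are hypothetical, not measurements from the article.
import math

def maintainability_index(halstead_volume, cyclomatic_complexity, loc):
    """Higher values mean easier maintenance."""
    return (171.0
            - 5.2 * math.log(halstead_volume)
            - 0.23 * cyclomatic_complexity
            - 16.2 * math.log(loc))

# Moving a cross-cutting concern into an aspect shrinks and simplifies a class:
before = maintainability_index(halstead_volume=2500.0,
                               cyclomatic_complexity=14, loc=220)
after = maintainability_index(halstead_volume=1800.0,
                              cyclomatic_complexity=9, loc=150)
print(before < after)  # True: the smaller, simpler class scores higher
```

The trade-off the article reports also follows from the formula: the aspect itself adds code (its own `loc` and volume), so when too few classes benefit, the aggregate index can fall even though each modified class improves.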
Further research into the impact of AOP on maintenance is promising: applying it to other subsystems, using other metrics, and reviewing it in terms of other system requirements such as resource efficiency. It is also promising to research the impact of AOP on software maintenance when it is applied in the implementation of design patterns.
Key words: software maintenance, system module, object-oriented programming, aspect-oriented implementation, class, cross-cutting concern, code metric, maintainability index.

12. Литвин В. В., Гопяк М. Я., Оборська О. В., Вовнянка Р. В. Метод побудови інтелектуальних агентів на основі адаптивних онтологій.


Vasyl Lytvyn1, Mariya Hopyak2, Oksana Oborska3, Roman Vovnyanka4

Information Systems and Networks Department,
Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE,

Scientific research in the development and implementation of intelligent agents lies in developing mathematical models, methods and means for the automated construction of information systems targeted at areas of human activity that require logical reasoning, specific skills and experience, that is, systems based on knowledge. According to experts in information software system development, this class of applications is the most in demand. Such applications include decision tasks in domains such as diagnosing diseases and technical faults; planning and monitoring activities; forecasting and classifying events; processing natural language texts (quasi-summarization, quasi-annotation); and others.
The main component of an intelligent agent is its knowledge base, which is formed according to the domain the system is oriented to. Traditional methods of knowledge engineering (eliciting knowledge from experts, data mining, machine learning, etc.) are not based on a system of verified and accepted standards, which is why knowledge bases built on their basis eventually lose their functionality due to the low efficiency of their operation. Ontology engineering is used as a knowledge engineering standard, and its result is a knowledge base ontology. An ontology is a detailed formalization of some field of knowledge by means of a conceptual scheme. Such a scheme consists of a hierarchical structure of concepts, the relations between them, and the theorems and restrictions accepted in a particular domain. Using ontologies as part of the knowledge base of an intelligent agent helps to solve a number of methodological and technological problems that occur during the development of such systems.
As a result of this research, an ontology-based method of creating an intelligent agent and improving its efficiency has been developed. This was achieved through the use of previously developed software based on the use of ontologies in such systems, and through the adaptation of ontologies to the specific problems of the domain. The structure of traditional ontologies was modified by introducing weights of importance for concepts and relations. This made it possible to adapt the ontology to the specific problems of the domain and to the needs of the system user by tuning these weights; such a model of ontology specifies not only explicit but also implicit knowledge. Mathematical support was developed for the functioning of the ontology-based intelligent agent, which helped to formalize the decision-making process of such a system. Unlike other metrics, the proposed semantic metric based on an adaptive ontology takes into account not only the taxonomy of the concepts but also the causal dependencies between them. The mathematical support is based on the automated determination of the set of properties whose values drive the decision support process. Based on the built models, methods and algorithms, the software of the intelligent agent makes it possible to implement the individual components and functional modules of the intelligent agent.
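To make the idea of a weighted semantic metric concrete, here is a hypothetical sketch (not the article's metric): concepts are graph nodes, each relation carries an importance weight in (0, 1], and the distance between two concepts is the cheapest chain of relations, where more important relations cost less. The toy ontology and the cost function 1/weight are illustrative assumptions.

```python
# Hypothetical semantic distance over a weighted ontology graph:
# Dijkstra over relation costs 1/weight, so important links are "shorter".
import heapq

def semantic_distance(relations, source, target):
    """relations: list of (concept_a, concept_b, weight); returns path cost."""
    graph = {}
    for a, b, w in relations:
        graph.setdefault(a, []).append((b, 1.0 / w))
        graph.setdefault(b, []).append((a, 1.0 / w))
    dist, heap = {source: 0.0}, [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            return d
        if d > dist.get(node, float("inf")):
            continue
        for nxt, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return float("inf")

# Toy ontology: strong taxonomic links versus one weak direct association.
ontology = [("disease", "infection", 0.9),
            ("infection", "fever", 0.8),
            ("disease", "fever", 0.25)]
print(round(semantic_distance(ontology, "disease", "fever"), 3))  # 2.361
```

Tuning the weights changes which chains of relations dominate the metric, which is the sense in which an adaptive ontology can be fitted to a specific domain problem or user.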
Key words: adaptive ontology, knowledge base, intelligent agent, the weight of importance of concepts and relationships.

13. Литвиненко В. І., Фефелов А. О., Кожухівська О. А. Метод прогнозування гетероскедастич¬них процесів з використанням синтезованих поліноміальних нейронних мереж.


Volodymyr Lytvynenko1, Olga Kozhukhivska2, Andrey Fefelov3
1Head of the Department of Informatics and Computer Science, Kherson National Technical University,
2Educational and Research Institute of Natural Sciences Cherkasy National University
named after Bogdan Khmelnitsky,
3Department of Design, Kherson National Technical University

Decision-making concerning the development of non-stationary processes is impossible without forecasts of the trends and risks that arise in the process. Currently, there are several methods and approaches to the problem of mathematically modeling and forecasting such processes, both their general trends and their risks, which are often non-stationary in nature. In this case, the variance and the standard deviation (volatility) of the dependent variable are used as the measure of risk. Volatility describes the degree of variability of a process in time.
This paper focuses on the problems of modeling and forecasting heteroscedastic time series by the combined use of immune algorithms and polynomial neural networks (PNN). Such models are useful in decision support systems for predicting the value of stocks, exchange rates, inflation, commodity prices and so on.
The main goal is to develop a methodology for improving the quality of forecasts of the trends and volatility of random non-stationary processes through the development of new models. To achieve this goal, the authors developed methods of building information technologies based on artificial immune systems for forecasting heteroscedastic processes. The idea of hybridization rests on the fact that most paradigms contain elements of information that must be determined in advance, which is in itself a difficult task.
Below, a forecasting methodology based on the mathematical apparatus of polynomial neural networks (PNN) and artificial immune systems (AIS) is described. In this paper, the task of constructing a predictive PNN is presented as a problem of global optimization.
This means that each individual of the AIS population encodes a complete solution that includes both a structural and a parametric part. The terminal alphabet consists of random variables and constants. To account for the parametric component, besides the head and tail of the individual's structure, a coefficient area is introduced, located at the end of the line. The genotype of an individual is represented by a binary string whose coding sections encode the relevant symbol of the alphabet or a constant with a given accuracy. The affinity of an individual is calculated as the mean squared error of the model on the training data.
The paper presents a method of determining the structure and weights of a polynomial neural network using a clonal selection algorithm and shows its application to time series prediction. The distinctive feature of the methodology is a way of encoding solutions that allows structural and parametric identification of the PNN to be performed simultaneously.
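The clonal selection idea can be illustrated with a minimal sketch. This is not the authors' PNN encoding: here an antibody is just the coefficient vector of a simple quadratic model, affinity is the mean squared error on training data (as in the article), and the population sizes, mutation schedule and target function are hypothetical.

```python
# Minimal clonal selection sketch: keep the fittest antibodies, hypermutate
# clones of them, and select survivors, with affinity = MSE on training data.
import random

def affinity(coeffs, data):
    """Mean squared error of the model y = a + b*x + c*x^2 on training data."""
    a, b, c = coeffs
    return sum((a + b * x + c * x * x - y) ** 2 for x, y in data) / len(data)

def clonal_selection(data, pop_size=20, n_best=5, n_clones=5, steps=200, rng=None):
    rng = rng or random.Random(1)
    pop = [[rng.uniform(-2, 2) for _ in range(3)] for _ in range(pop_size)]
    for t in range(steps):
        sigma = 0.5 * (1.0 - t / steps) + 0.01      # decaying mutation scale
        best = sorted(pop, key=lambda ind: affinity(ind, data))[:n_best]
        clones = [[g + rng.gauss(0.0, sigma) for g in ind]
                  for ind in best for _ in range(n_clones)]
        # Elitist survivor selection: the parents compete with their clones.
        pop = sorted(best + clones, key=lambda ind: affinity(ind, data))[:pop_size]
    return min(pop, key=lambda ind: affinity(ind, data))

# Noise-free training data from y = 1 + 2x + 0.5x^2 (illustrative target).
data = [(x / 4.0, 1 + 2 * (x / 4.0) + 0.5 * (x / 4.0) ** 2) for x in range(-8, 9)]
best = clonal_selection(data)
print(round(affinity(best, data), 4))
```

In the article's method the same loop additionally evolves the *structure* of the PNN (which terms appear), encoded together with the coefficients in one genotype.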
Volatility defines price variability over time. Among the known trading methods, those that use the volatility of market prices in their calculations are among the most effective. In this paper, the conditional sample variance of the time series is used as the measure of volatility. A process with variable variance is called heteroscedastic; to forecast this variance, a model of the heteroscedastic process must be built.
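As a hedged illustration of variance-as-volatility, the sketch below computes a simple rolling (windowed) sample variance; the window length and the series are assumptions, and the conditional-variance models the article forecasts are richer than this crude estimate.

```python
# Illustrative rolling sample variance as a crude volatility estimate.
def rolling_variance(series, window):
    """Sample variance over each trailing window of the series (window >= 2)."""
    out = []
    for i in range(window, len(series) + 1):
        chunk = series[i - window:i]
        mean = sum(chunk) / window
        out.append(sum((v - mean) ** 2 for v in chunk) / (window - 1))
    return out

# A calm price series versus a volatile one over the same window.
calm = [10.0, 10.1, 9.9, 10.0, 10.1]
storm = [10.0, 12.0, 8.0, 13.0, 7.0]
print(rolling_variance(calm, 5)[0] < rolling_variance(storm, 5)[0])  # True
```

A heteroscedastic model aims to forecast how this quantity evolves, rather than merely measuring it in hindsight.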
We have proposed and investigated several schemes for building these models with the involvement of artificial immune system algorithms. For testing, the time series HRS DJ day CLOSE was chosen; it contains 607 values with a step of 1 day. The training set was created separately from the test set. Each of the proposed schemes was used in two modes: a) an auto-forecast 10 steps ahead (the forecast horizon) and b) a one-step forecast.
The paper describes the authors' methodology, which is designed to improve the quality of forecasts of the trends and volatility of non-stationary processes. A technique for creating hybrid information technologies on the basis of artificial immune systems and polynomial neural networks has been developed for forecasting heteroscedastic processes.
Key words: heteroscedastic process, volatility, clonal selection algorithm, GMDH, polynomial neural network.

14. Пасічник В. В., Шестакевич Т. В. Інформаційні технології підтримки особистісно-орієнтованого навчання – глобальна освітня тенденція.


Volodymyr Pasichnyk1, Tetiana Shestakevych2
Information Systems and Networks Department,
Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE,

The article deals with the specifics of learner-centered learning as the most effective way of realizing modern educational concepts, in which the educational process is centered on the learner and his or her possibilities and capabilities. The individualization of learning is designed to take into account the psycho-physical development, abilities and special educational requirements of the person in order to ensure favorable conditions for the full development of the personality. For persons with special needs, learner-centered inclusive education will contribute to social adaptation, and the use of information technology to support such learning will make it possible to design the individual educational route effectively.
The achievements of researchers of inclusive education have received a new use thanks to the modern understanding of education as a continuous process of improvement that lasts throughout life. Thus, an important direction of the development of inclusive education coincides with the dominant modern European and world educational trends.
The basic conceptual principles of the individual educational route involve a progressive increase in the autonomy of the person in developing the route and the learning outcomes. It is advisable to involve the person with special needs and his or her family, teachers and specialists in correctional medicine, higher education specialists and employers in forming the components of the individual educational route. Information and technological support of the processes of learner-centered inclusive education is to be implemented in the form of an appropriate recommender system. Such an intelligent search system recommends the most relevant components of the stages of learner-centered education.
The recommender system should serve to identify the special needs of the individual; to monitor the educational trajectory; to create reports on the educational route at the request of university staff, employers and other interested persons; to monitor the education, psychological correction and development of a person; to act as a link of interaction between the specialists of inclusive education; to support multiple analyses of the accumulated data; and to record the correction of personal development. The recommender system of learner-centered inclusive education makes it possible to facilitate the work of the participants of such a process and to improve it. The functioning of recommender systems for learner-centered inclusive education lies in the analysis of the characteristics of the inclusive person, his or her mental and physical development, and the individual learning trajectory, in order to build a high-quality individual educational route that would meet the principles of learner-centered learning and ensure the efficient achievement of the learning objectives.
Forming a learner-centered route of inclusive education for a person with special needs requires consideration both of the characteristics of his or her mental and physical development and of the available specialized tools and problem-oriented resources that support such education. The set of specialized software, technical, informational, and problem-solving resources supporting learner-centered inclusive education will be called the information and communication tools of inclusive education.
Additional benefits of applying information technology support of learner-centered inclusive education will be the opportunities to increase the efficiency of the educational process, adapt quickly to changing conditions, optimize the channels of gathering information, automate the control of learning outcomes, analyze learning outcomes, and automate and improve the planning of the educational process.
Key words: lifelong learning, learner-centered learning, individual educational route, individual learning trajectory, inclusive education, information and communication tools of inclusive education, information and technological support, recommender systems.

15. Тавров Д. Ю., Чертов О. Р. Забезпечення групової анонімності як задача пошуку потоку в мережі мінімальної вартості.


Dan Tavrov1, Oleg Chertov2
Applied Mathematics Department,
National Technical University of Ukraine “Kyiv Polytechnic Institute”, UKRAINE,

Nowadays, it has become a common practice to provide public access to primary nonaggregated statistical data, such as population censuses, statistical surveys, and so on. Necessary precautions need to be taken in order to guarantee that sensitive data features are masked, and that data anonymity cannot be violated. This problem has long been recognized within the field of privacy-preserving data publishing, which is mainly concerned with providing anonymity for individual respondents (individual anonymity). In recent times, another approach, called providing group anonymity, has gained popularity.
In the case of providing group anonymity in a given dataset, i.e. protecting information about a group of people, it is important to protect intrinsic data features and distributions. To solve this task, it is expedient to modify the dataset in order to mask such sensitive features and distributions. Obviously, such modification leads to introducing a certain amount of distortion into the dataset, and can be accomplished by using different approaches. Therefore, in order to preserve as much data utility as possible, once a particular modification approach is chosen, the goal becomes to introduce minimal amount of distortion under that approach.
In this article, we show that the task of providing group anonymity can be reduced to the well-known minimum cost network flow problem, where there exists a bijective mapping between the network architecture and the parameters of the data modification that must be performed in order to provide group anonymity. To solve the classical minimum cost network flow problem, it is sufficient to use algorithms of polynomial and pseudo-polynomial complexity developed in the literature. However, in the case of group anonymity, we can protect the data using various kinds of data modification. Each modification guarantees protection of sensitive data features, but corresponds to a different network architecture, and hence to a different amount of data distortion. Since it is not possible to define the best (from the point of view of minimizing data distortion) network architecture beforehand, it is expedient to impose certain restrictions on the architecture. Due to the subjective nature of such restrictions and the inherent uncertainty of statistical data, we propose to formalize them as appropriate fuzzy restrictions. The problem of determining such fuzzy restrictions is an ill-defined one, and often can be solved only by expert evaluation. In this setting, the task of providing group anonymity can be treated as a generalized minimum cost network flow problem. Such a problem is intractable, and can be solved using appropriately tailored heuristics, such as evolutionary algorithms, in particular, memetic algorithms.
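The classical reduction target can be made concrete with a small self-contained sketch. The successive-shortest-path solver and the four-node toy network below are illustrative stand-ins: the article's networks are derived from data-modification parameters and carry fuzzy restrictions, which this sketch omits.

```python
# Classical minimum cost network flow via successive shortest augmenting
# paths (Bellman-Ford over the residual graph with negative reverse costs).
def min_cost_flow(n, edges, source, sink, flow_target):
    """edges: list of (u, v, capacity, cost). Returns the total cost of
    sending flow_target units from source to sink, or None if infeasible."""
    graph = []                       # flat edge list: [to, residual_cap, cost]
    adj = [[] for _ in range(n)]
    for u, v, cap, cost in edges:    # forward/reverse arcs at indices 2k, 2k+1
        adj[u].append(len(graph)); graph.append([v, cap, cost])
        adj[v].append(len(graph)); graph.append([u, 0, -cost])
    total_cost, flow = 0, 0
    while flow < flow_target:
        INF = float("inf")
        dist = [INF] * n; dist[source] = 0
        parent = [None] * n          # residual edge index used to reach a node
        for _ in range(n - 1):       # Bellman-Ford shortest path by cost
            for u in range(n):
                if dist[u] == INF:
                    continue
                for ei in adj[u]:
                    to, cap, cost = graph[ei]
                    if cap > 0 and dist[u] + cost < dist[to]:
                        dist[to] = dist[u] + cost
                        parent[to] = ei
        if dist[sink] == INF:
            return None              # cannot push the requested flow
        push, v = flow_target - flow, sink
        while v != source:           # bottleneck capacity along the path
            ei = parent[v]; push = min(push, graph[ei][1]); v = graph[ei ^ 1][0]
        v = sink
        while v != source:           # augment forward and reverse arcs
            ei = parent[v]
            graph[ei][1] -= push; graph[ei ^ 1][1] += push
            v = graph[ei ^ 1][0]
        flow += push; total_cost += push * dist[sink]
    return total_cost

# Toy network: 0 = source, 3 = sink; a cheap route and an expensive one.
edges = [(0, 1, 2, 1), (0, 2, 2, 4), (1, 3, 2, 1), (2, 3, 2, 1)]
print(min_cost_flow(4, edges, 0, 3, 3))  # 9: two units via 0-1-3, one via 0-2-3
```

In the generalized fuzzy-restricted setting described above, no such exact polynomial algorithm applies, which is why the article turns to memetic heuristics.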
To solve the task of providing group anonymity as a generalized minimum cost network flow problem, we develop a novel information technology. This information technology enables us to provide anonymity for groups that can be uniquely identified in a dataset by analyzing the values of so-called vital attributes. Removing vital attributes from the dataset can be shown to be insufficient to guarantee group data protection, because anonymity can still be violated using fuzzy models of the given groups in the form of fuzzy inference systems. The proposed information technology takes this problem into account and can be used to provide anonymity for those groups that can be approximated by fuzzy models. The information technology consists of four stages, each of which can in turn be divided into several operations. For each stage of the technology, appropriate UML activity diagrams are given, as well as a UML activity diagram encompassing all the stages of the information technology. The structure of the proposed information technology is shown in a corresponding UML component diagram.
The application of the developed information technology is illustrated with a real-data example of providing anonymity for the regional distribution of military personnel working in the state of Florida, the U.S.
Key words: data group anonymity, microfile, information technology.

16. Цмоць І. Г., Скорохода О., Кісь Я. П. Синтез інтегрованих автоматизованих систем управління підприємством.


Ivan Tsmots1, Oleksa Skorokhoda2, Yaroslav Kis3
1,2Automated control systems Department,
Lviv Polytechnic National University, 12 S. Bandera Str., Lviv, 79013, UKRAINE,
3Information Systems and Networks Department,
Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE,

Currently, enterprises of Ukraine operate in an environment characterized by increasing competition, a growing number of partners on the international market, the use of new production technologies, and rapid change and instability of the environment. The key feature of enterprise management in such conditions is a rapid response to the impact of external factors by taking timely management decisions aimed at improving the efficiency of the company and the quality of its products. Such control can be ensured through the development and use of integrated automated control systems (IACS), which provide management of technological, organizational and economic processes in the enterprise.
The current stage of IACS development is focused on the widespread use of web technologies, databases and DBMS, data storage and spatial data systems, SCADA, and intelligent components for analytical processing, in order to assess the condition of the company, identify potential threats and future opportunities, and make effective management decisions on that basis.
The main objectives of a modern enterprise IACS are integrating the functions of technological, organizational and economic processes and creating a unified information space with accurate, complete and current information. The central concept in IACS is that of “integration”. Integration in IACS is defined as a way of organizing individual components into a single system that provides their coordinated and purposeful joint interaction, which leads to high efficiency of the entire system. Integration in IACS is carried out in the following areas: functional, organizational, informational, algorithmic, technical and economic.
The development of an enterprise IACS is based on hierarchical component technology, which envisages separating the development process into hierarchical levels and types of deliverables (algorithms, hardware and software). To implement this technology, a decomposition method is used, which involves splitting the IACS into individual components. At each level of the hierarchy, problems of the corresponding complexity are solved, characterized by their own units of information and processing algorithms. By complexity, the solved problems are divided into three hierarchical levels.
Key words: IACS, system integration, component-hierarchical technology, modularity.

17. Бісікало О. В., Висоцька В. А. Метод опрацювання текстової інформації для автоматичного виявлення значущих ключових слів.


Oleh Bisikalo 1, Victoria Vysotska2
1Institute for Automatics, Electronics and Computer Control Systems,
Vinnytsia National Technical University, 95 Khmelnytske shose St., Vinnytsia, 21021, UKRAINE,
2Information Systems and Networks Department,
Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE,

Linguistic research in the spheres of morphology, morphonology and structural linguistics has identified different patterns for the description of word forms. Since the beginning of the development of generative grammar theory, linguists have focused not only on the description of finished word forms, but also on the processes of their synthesis. Ukrainian linguists have carried out fruitful research in such functional areas as the theoretical problems of morphological description; the classification of morphemes and the word-formation structure of derivatives in the Ukrainian language; regularities of affix combinations; word-formative modeling of the modern Ukrainian language in integral dictionaries; the principles of internal word organization; the structural organization of denominal verbs and suffixal nouns; problems of word-formative motivation in the formation of derivatives; the regularities of morphological phenomena in Ukrainian word formation; morphological modifications in inflection; morphological processes in word formation and adjective inflection of the modern Ukrainian literary language; textual content analysis and processing, etc.
This dynamic approach of modern linguistics to the analysis of the morphological language level, with the researchers' attention focused on developing morphological rules, allows the results of theoretical research to be used effectively in practice for constructing computer linguistic systems and processing textual content for various purposes. One of the first attempts to apply generative grammar theory to linguistic modeling belongs to A. Gladky and I. Melchuk. Scientific results by N. Chomsky, A. Gladky, M. Hross, A. Lanten, A. Anisimov, Y. Apresyan, N. Bilhayeva, I. Volkova, T. Rudenko, E. Bolshakova, E. Klyshynsky, D. Lande, A. Noskov, A. Peskova, E. Yahunova, A. Herasymov, B. Martynenko, A. Pentus, M. Pentus, E. Popov and V. Fomichev are applied to develop such textual content processing tools as information search systems, machine translation, textual content annotation, morphological, syntactic and semantic analysis of textual content, didactic educational systems for textual content processing, linguistic support of specialized linguistic software systems, etc.
Linguistic analysis of content consists of three stages: morphological, syntactic and semantic. The purpose of morphological analysis is to obtain bases (word forms without inflections) together with the values of grammatical categories (for example, part of speech, gender, number, case) for each word form. There are exact and approximate methods of morphological analysis. Exact methods use dictionaries of word bases or word forms; approximate methods use experimentally established links between fixed letter combinations of word forms and their grammatical meanings. The use of a word-form dictionary in exact methods simplifies morphological analysis. For example, in the Ukrainian language researchers solve the problem of vowel and consonant alternation by changing the word usage conditions. The determination of the word base and its grammatical attributes is then reduced to a dictionary search and the selection of appropriate values, and further morphological analysis is applied only if the desired word form cannot be found in the dictionary. With a sufficiently complete thematic dictionary, the speed of textual content processing is high, but the required memory is several times larger than with a dictionary of bases. Morphological analysis using a dictionary of bases relies on inflectional analysis and precise selection of word bases. The main problem here is the homonymy of word bases; to resolve it, the compatibility of the selected base with its inflection within the word is checked.
Approximate methods of morphological analysis determine the grammatical class of a word by its final letters and letter combinations. First, the stem is separated from the word base. Letters are then removed from the end of the word one by one, and the obtained letter combinations are compared with the inflection list of the appropriate grammatical class. When a match of the final part is found, the remaining part of the word is taken as its base. Morphological analysis may yield ambiguous grammatical information; this ambiguity disappears after parsing. The task of syntactic analysis is parsing sentences based on the data from the dictionary. At this stage nouns, verbs, adjectives, etc. are identified, and the links between them are indicated in the form of a dependency tree.
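The approximate method described above, stripping letters from the end of a word-form and matching the removed tail against per-class inflection lists, can be sketched as follows; the inflection table is a tiny illustrative fragment, not a real Ukrainian inflection inventory.

```python
# Approximate morphological analysis by suffix matching: try progressively
# shorter tails of the word-form against inflection lists per grammatical
# class. The table below is an illustrative fragment only.
INFLECTIONS = {
    "noun":      ["ами", "ові", "ом", "ах", "и", "а", "у", "і"],
    "adjective": ["ими", "ого", "ий", "а", "е", "і"],
}

def analyze(word_form):
    """Return (base, grammatical_class, inflection) candidates, longest tail first."""
    results = []
    for cut in range(1, len(word_form)):        # cut=1 gives the longest tail
        tail = word_form[cut:]
        for gram_class, endings in INFLECTIONS.items():
            if tail in endings:
                results.append((word_form[:cut], gram_class, tail))
    return results
```

As the abstract notes, the method is ambiguous by design: a word-form such as "зеленими" yields both an adjective and a (spurious) noun reading, and the ambiguity is resolved only at the parsing stage.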
The level of practical utility of the known results of statistical text analysis is considerably limited by the word ambiguity problem, a key issue of computational linguistics. The problem cannot be solved at the level of single-word analysis, whether morphological or statistical; therefore, to extract knowledge from text, more complex linguistic means of syntactic and/or semantic or semantic-syntactic analysis must be used. The development of a hybrid approach that combines linguistic and statistical text analysis tools determines the relevance of the research problem: to identify statistical regularities in the syntagmatic and paradigmatic (in general, complex) relationships between the word-forms/lemmas of a text.
The article is devoted to obtaining new numerical information on profound text characteristics and to its application for efficiently solving certain problems of computational linguistics. The purpose of the study is to justify, theoretically and experimentally (using modern tools), an approach to evaluating the informativeness of statistical features and variants of complex relationships between the word-forms/lemmas of a text.
To achieve this goal, the following problems were posed and solved: the main points of the approach were formulated and its advantages were stated in the form of a hypothesis; a formal concept of the subject area was suggested; and statistical and information estimates of the relationships between lemmas were obtained, which can technologically be determined using modern language packages, including DKPro Core.
The object of the research in the article is textual information analysis, and the subject of the research is the methods and models of knowledge extraction from text.
An associative-statistical approach to extracting knowledge from text, based on the linguistic ties between text lemmas, was further developed, including certain basic concepts of the approach (for example, word-form, lemma, complex relationship, linguistic system and subject area). The last concept, the subject area, formally defined as a predicate, is the most significant limitation of the proposed approach; within it, Hypothesis 1 was formulated and experimentally verified: the Pareto distribution is valid not only for the words/word-forms/lemmas of a particular subject area, but also for the identified set of relationships between them. A statistical evaluation of collections of text documents in the subject area was justified as an additional restriction of the approach: the expected value of the number of repeated relationships, and confidence intervals for the unknown expected value of the statistical population. This allowed an information analysis of the approach to be applied to the actual problem of determining keywords in a text, including an upper-bound estimate of the increased frequency of a document's keywords.
This article presents a comparative experimental study of methods for finding relevant keywords in Ukrainian-language content. The approach to automatic keyword determination incorporates Porter stemming adapted to Ukrainian words, the Levenshtein distance, the possibility of using a thematic dictionary, and the removal of stop words. In an experiment with 100 scientific publications in technical fields, numerous statistical characteristics of the precision of the results were obtained in comparison with the authors' reference version.
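The Levenshtein distance mentioned above, used to compare stemmed word-forms, has a standard dynamic-programming formulation; a minimal sketch:

```python
# Levenshtein (edit) distance: the minimal number of single-character
# insertions, deletions and substitutions turning string a into string b,
# computed row by row to keep memory linear in len(b).
def levenshtein(a, b):
    prev = list(range(len(b) + 1))              # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        curr = [i]                              # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,        # deletion
                            curr[j - 1] + 1,    # insertion
                            prev[j - 1] + (ca != cb)))  # substitution (0 if equal)
        prev = curr
    return prev[-1]
```

For keyword matching, stemmed forms within a small distance of each other (e.g. 1) can be counted as occurrences of the same keyword, which tolerates residual inflectional variation left by the stemmer.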
Key words: Porter Stemming, Levenshtein distance, Ukrainian language, keywords, search, thematic dictionary.

18. Кульчицький І. М. Концептуалізація понять “модель” та “моделювання” в наукових дослідженнях


Ihor Kulchytsky
Applied Linguistics Department, Lviv Polytechnic National University, 12 S. Bandera Str., Lviv, 79013, UKRAINE, E-mail:

Among scientific methods, modeling plays a special role. The author counts himself among the supporters of the assumption, made at the beginning of the last century, that any contemplation of our world, from the most ordinary to the richest in content, is a set of models, appropriate or inappropriate to the objects of contemplation, that create a more or less successful way of being.
However, the widespread use of models in scientific cognition has led to their meaning becoming blurred: users very often put their own meaning into them, different from the generally accepted one. Therefore, it is necessary to clarify the notion of a model, its main characteristics and its types.
The following philosophical maxims became the basis for generalization:
First maxim. The objective world exists; at least none of the philosophical currents denies that. Within everything in the world we distinguish the subject, Homo sapiens, as the basis and essence of the processes of cognition and of social and cultural life.
Second maxim. Cognition is a way of apprehending reality peculiar only to humans.
Third maxim. Cognition is inextricably linked to the notion of thinking, the conscious mental activity of a person that operates with the substantive content of consciousness.
Fourth maxim. Cognition is a synthesis of human sensory perception of reality with volitional actions and desires, an intuitive grasp of reality and emotions, character traits and human characteristics.
The study of the explanations, definitions and interpretations of the notions “model” and “modeling” in different sources allows us to present a model as a representative system, the analysis of which serves as a way to obtain information about another system.
By a system we understand the system-generating triad “structure; substance; subject”, where: structure is the set of objects that are components of the system and the relations between them; substance is that of which the system components consist; and subject is that which identifies the components of the system and the relations between them as elements of certain collections and can perform allowed actions upon them according to certain algorithms.
A model has the following characteristics: it is a thing; it is a system; it has a purpose; it is always simpler than the original; it is a means of obtaining new information about the prototype; studying a model has certain advantages over directly studying the prototype; a human creates it or consciously looks for it in nature; it has a homomorphic image which is isomorphic to a homomorphic image of the prototype.
According to the substrate of implementation, all models can be divided into material, information and mixed models. Material (substantive, physical) models reflect the visual properties of objects in a real material object-copy. Information (abstract) models have no physical implementation: they describe the properties and states of things and their relations with the outside world, and correspond to current human knowledge about the object of modeling. In mixed models, one part of the properties of the modeled object is implemented physically, and the other part in informational form.
In turn, we divide information models into mental models and models-data. Mental models appear in the form of images formed in a person's imagination as a result of contemplation, thinking, reasoning, etc.; models-data are mental models that are physically fixed in one way or another.
We classify models-data by the subject of modeling, the means of modeling and the method of implementation. By the subject of modeling, we divide information models into structural, meaningful and structural-meaningful ones. Structural models describe the structure of the object of modeling, meaningful models describe its qualitative and quantitative properties, and structural-meaningful models describe both.
By the means of modeling, we divide models-data into figurative, symbolic and figurative-symbolic ones. By the method of implementation, models-data are divided into computer and non-computer ones.
According to the area of application, models are divided into educational, research, scientific-technical, imitation and playing models.
By area of knowledge, models are divided according to the branch of science that explores the object.
By the time factor, models are divided into static and dynamic ones. Static models describe the object of study at a particular moment of time, and dynamic models describe the change of the object during a certain period.
Key words: model, system, thing, method, technology, science.

19. Лозинська О. В. Процеси інформаційної технології перекладу української жестової мови на основі граматично доповненої онтології.


Olga Lozynska
Information Systems and Networks Department,
Lviv Polytechnic National University, 12 S. Bandera Str., Lviv, 79013, UKRAINE,

The problem of developing a machine translation system for sign language has been studied by scientists for a long time. The solution to this problem can provide new communication opportunities for people with hearing impairments. The challenge of translation from Ukrainian Sign Language (USL) to Ukrainian spoken language (USpL) belongs to the tasks of machine translation.
Ukrainian sign language (USL) is a communication system for people with impaired hearing. Today in Ukraine there are about 400,000 people with impaired hearing. There are 59 specialized schools and 20 universities for this category of citizens.
To facilitate communication with the deaf, dictionaries, video dictionaries of USL and a USL simulator have been developed. However, there are no effective translation tools, so the development of an information technology for Ukrainian sign language translation is an urgent task. This information technology will be of great social importance; in particular, it will enable persons with hearing disabilities to engage actively in communication with people who do not know sign language.
The article summarizes the author's contribution to the translation of Ukrainian sign language. New methods and tools for Ukrainian sign language translation were developed. These tools can be used for the development of a machine translation system from spoken to sign language and vice versa, to facilitate communication between deaf people and those who do not know sign language.
The study of known approaches to sign language translation showed that rule-based and ontology-based approaches are the most applicable to Ukrainian sign language translation because of the lack of large parallel corpora for statistical translation. In order to increase translation quality, an alternative approach based on a grammatically augmented ontology was studied.
For the development of the information technology of Ukrainian sign language translation, the following problems were solved:
1) grammatical analysis of Ukrainian sign language;
2) task decomposition of Ukrainian sign language translation system;
3) building a system of GAO-based rules for Ukrainian sign language translation;
4) development of methods of the information technology for USL translation using GAO;
5) experimental studies and evaluation of the results.
The information technology of Ukrainian sign language translation consists of the following processes:
– filling the grammatically augmented ontology;
– rule-based translation and translation based on the grammatically augmented ontology;
– testing the translation system.
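The rule-based translation process listed above might be sketched as a pattern-to-order mapping from spoken-language word sequences to sign glosses; the rule table and part-of-speech tags below are hypothetical illustrations and do not reflect the actual grammar rules of USL or the GAO-based rule system of the article.

```python
# An illustrative rule-based translation step: a rule matches the
# part-of-speech pattern of a tagged spoken-language sentence and gives
# the output order of sign glosses. Rules here are hypothetical examples.
RULES = {
    ("PRON", "VERB", "NOUN"): (0, 2, 1),   # e.g. move the verb to the end
}

def translate(tagged_words):
    """tagged_words: list of (word, pos) pairs. Returns a gloss sequence."""
    pattern = tuple(pos for _, pos in tagged_words)
    # Fall back to the original order if no rule matches the pattern.
    order = RULES.get(pattern, tuple(range(len(tagged_words))))
    return [tagged_words[i][0].upper() for i in order]   # glosses in uppercase
```

In a full system the rule set would be derived from the grammatically augmented ontology rather than written by hand, but the matching-and-reordering core is the same.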
To fill the grammatically augmented ontology of Ukrainian spoken and sign languages, a domain-specific language (DSL) was developed. The DSL, named GAODL, was created to facilitate uniform editing and processing of grammatically augmented ontologies. These ontologies can be created for specific subject areas and later merged to obtain upper ontologies. The GAODL language contains means for the definition of new grammatical attributes, synsets, relations on synsets, predicates and expressions.
Grammatically augmented ontologies for the “Education”, “Nature”, “Journey”, “State”, “Family”, “Production”, “Profession”, “Army”, “Theatre”, “Culture” and “Hospital” subject areas were built. For this purpose, 1200 words were collected from these subject areas, and the meaning of each word was verified using a Ukrainian glossary.
The evaluation of the translation system using the grammatically augmented ontology, compared with a statistical method, a rule-based method and a method using a dictionary of concepts “Ukrainian spoken language – Ukrainian sign language”, shows the best result for translation from Ukrainian spoken language into Ukrainian sign language (93.2 %).
The studies have shown the high efficiency of the information technology for Ukrainian sign language translation based on the grammatically augmented ontology and the possibility of its use in machine translation.
The main results of the research were implemented at the Lviv Maria Pokrova Secondary Residential School for Deaf Children.
Key words: Ukrainian sign language, machine translation, synset, ontology, grammatically-augmented ontology, domain specific language.

20. Чирун Л. Б., Кучковський В. В., Висоцька В. А. Особливості методів контент-аналізу текстових масивів даних web-ресурсів у межах регіону.


Liliya Chyrun1, Volodymyr Kuchkovskiy2, Victoria Vysotska3
Information Systems and Networks Department,
Lviv Polytechnic National University, 12 S. Bandera Str., Lviv, 79013, UKRAINE,

The main function of any Web-resource processing system is to provide users with information in response to their requests. The delivery of the necessary data by a Web-resource processing system is realized through its main operation: content search. Content search in a Web-resource processing system consists in selecting, for a request received from a user, the content that the user needs. The user's need for actual and operational information in the course of practical activity constitutes an information request. Under the influence of the obtained content, the information needs of users are constantly modified, changed and transformed.
An obligatory element of a Web-resource processing system is the content-search subsystem. The content-search subsystem consists of four main modules: the module for user registration and request entry; the content processing module; the content search module; and the module for content preservation and presentation.
The impossibility of using natural language as the main means of content presentation in the content-retrieval subsystem results in the need for artificial linguistic resources. An information retrieval language (IRL) is a specialized artificial language designed to describe the main content of the content entering the system, in order to allow its subsequent search. An IRL is created on the basis of natural language, but differs from it in its compactness and in the absence of grammatical rules and semantic ambiguity.
Models of text information search are characterized by four parameters: the representation of content and requests; the criterion of content relevance; the methods of ranking query results; and the feedback mechanisms allowing the user to assess relevance.
Unlike the database environment, the content-retrieval subsystem has no exact representation of content and user queries. Users usually begin with an inaccurate and incomplete request, and therefore with a low efficiency of search, gradually refining the request by iteration. The system supports feedback from the user, allowing an assessment of the relevance of the found content to the original request; this approach improves search efficiency. To simplify the presentation of feedback, the vector-space search model is used, and the user is simply given the opportunity to mark content as relevant or not.
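The vector-space model with relevance feedback described above can be sketched as follows; the toy documents and the Rocchio-style update coefficients are illustrative assumptions, not the article's actual system.

```python
# Vector-space retrieval sketch: documents and the query are term-frequency
# vectors ranked by cosine similarity; marking a document relevant nudges
# the query toward it (a simplified Rocchio update). Toy data only.
import math
from collections import Counter

def cosine(u, v):
    dot = sum(u[t] * v[t] for t in u if t in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def feedback(query, relevant_doc, alpha=1.0, beta=0.5):
    """Rocchio-style update: move the query toward a document marked relevant."""
    updated = Counter({t: alpha * w for t, w in query.items()})
    for t, w in relevant_doc.items():
        updated[t] += beta * w
    return updated

docs = [Counter("content search web".split()),
        Counter("weather report today".split())]
q = Counter("web content".split())
ranked = sorted(docs, key=lambda d: cosine(q, d), reverse=True)
```

After the user marks the top document relevant, `feedback` produces a refined query whose similarity to that document is strictly higher, which is exactly the iterative refinement the abstract describes.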
The basic requirements for a Web-resource processing system are as follows: a moderator checks all materials when they are added to the site; if the information is inaccurate or imprecise, the moderator removes the material; if all fields are filled in, the material is successfully added to the database and also indexed for search; if not all fields required for publication are filled in, the material is simply not added to the site database; when a material is transferred to the archive, its indexing by search engines is prohibited and it is removed from the search index.
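The moderation rules above amount to a small decision procedure; a sketch, in which the field names and the store/index objects are illustrative assumptions:

```python
# The moderation workflow as a single decision function: reject inaccurate
# material, hold incomplete material, otherwise store and index it, and
# drop archived material from the search index. Field names are assumptions.
def moderate(material, store, index):
    """Apply the site's moderation rules to a submitted material dict."""
    if not material.get("accurate", False):
        return "rejected"                    # moderator removes inaccurate material
    required = ("title", "body", "category")
    if not all(material.get(f) for f in required):
        return "incomplete"                  # not added to the site database
    store.append(material)                   # added to the database
    index.add(material["title"])             # indexed for search
    if material.get("archived"):
        index.discard(material["title"])     # archived material leaves the index
    return "published"
```

A real system would of course involve a human moderator's judgment for the accuracy check; the function only fixes the order of the checks.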
The article is devoted to the development of methods and software for processing information resources in Internet systems. A new approach to the application and implementation of business processes for building such systems is formulated, along with methods and software for processing content and Web-resources. A well-known method of analyzing textual information is content analysis, a standard research method in the social sciences whose object is the analysis of the content of text arrays and communication correspondence (comments, forums, e-mail, articles, etc.). The concept of content analysis has no unambiguous definition, because systems based on different approaches are incompatible. The use of content analysis for processing text information resources offers several advantages that simplify business and help solve the problems facing the participants of business processes.
Key words: content, information resource, business-process, content management system, content lifecycle.


21. Андруник В. А., Висоцька В. А., Чирун Л. В. Проект розроблення та впровадження системи електронної контент-комерції.


Vasyl Andrunyk1, Victoria Vysotska2, Lyubomyr Chyrun3
1,2Information Systems and Networks Department, 3Software Department,
Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE,

In the given article the main problems of electronic content commerce and the functional services of commercial content processing are analyzed. The proposed model gives an opportunity to create an instrument for information resource processing in electronic content commerce systems (ECCS) and to implement the subsystems of commercial content formation, management and support. The process of ECCS design and creation, as an Internet marketing result, is iterative. It contains a number of stages (from the analysis, design and development of a plan to prototype construction and experimental tests). The latter process begins with the formation of specifications and layout, content template creation, content formation and its subsequent publishing according to the site's structure. In the initial stages (before setting functional requirements and initiating development), regular users are involved in the process through poll letters, alternative designs and prototyping of varying degrees of readiness. Thus, valuable information is collected without much effort, both evoking the users' sense of direct involvement in the design process and winning their trust. The paper analyzes the methods and models of information resource processing in electronic content-commerce systems and identifies the basic laws of the transition from commercial content formation to its realization. A formal model of ECCS is created, which allows their implementation across the phases of the commercial content lifecycle. The developed formal model of information resource processing in electronic content-commerce systems allows us to create a generalized typical ECCS architecture, which is proposed in the paper and helps implement the processes of commercial content formation, management and realization.
Based on the analysis of the basic tasks of electronic content commerce systems (ECCS), the instrumental means, information technologies and software for constructing such systems have been analyzed and summarized in the article. An ECCS functional diagram with information resource processing subsystems has been developed. The overall architecture, objectives and principles of ECCS realization are described in detail. The functional elements of the system are described according to GOST 24.204.80, GOST 24.201-79, GOST 19.201-78, GOST 34.602-89, IEEE Std 1233, 1998 Edition, and IEEE Std 830-1998. Software creation tools, tools for content management and maintenance, and the software realizations of the developed ECCS with information resource processing subsystems for setting up e-commerce in online newspapers and journals are also presented in the article. A functional logistic method of content processing as a stage of the content lifecycle is proposed. The method of commercial content processing describes the processes of information resource formation and rubrication and simplifies commercial content management. The main problems of the functional services of commercial content processing are analyzed, and the proposed method gives an opportunity to create means of information resource processing and to implement commercial ECCS.
The purpose of the project is the implementation of standardized testing methods and the approbation of software for processing information resources in ECCS. The formation of an overall ECCS architecture promotes the generalization of ECCS information resource processing techniques across the stages of formation, management and maintenance of commercial content, in order to reduce the time needed to construct common e-business systems. The implementation of ECCS reduces the time needed for producing one's own commercial content, analyzing external commercial content from other sources, dynamically analyzing the commercial content lifecycle, and statistically analyzing ECCS functioning and the activities of information resource users in ECCS; it also contributes to increasing the target audience of the information resources and extending the functional capabilities of ECCS. The purpose of an ECCS is the formation, management and support of commercial content on the principles of information resource processing. ECCS is designed to create common functional requirements and standardized specifications for development through the optimization of information resource processing stages in similar systems.
ECCS performs the following tasks.
1. Formation of commercial content (collecting data from various sources and forming it, identifying keywords and duplicates, digest formation, categorization and selective content distribution, content creation and maintenance, creation of content filtering rules).
2. Commercial content management (formation/rotation of databases and access to them; subscription to thematic content; content distribution; individualization of users' work; storing of users' requests and sources; keeping operation statistics; search providing; generation of output forms; information interaction with databases; formation of an information resource; formation of comments and content feedback; voting on content).
3. Commercial content support (formation of content stream portraits as well as of potential/constant users and the target audience; identifying thematic subjects of content; formation of content relationship tables; calculation of ratings of ECCS content and moderators/authors; detection, monitoring and clustering of new events in the content streams).
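Two of the formation subtasks listed above, keyword identification and duplicate detection, can be sketched in a few lines. The following is only an illustrative Python approximation of the general idea, not the system's actual algorithms; the stop-word list, shingle size, and sample text are assumptions made for the example.

```python
import re
from collections import Counter

# Assumed minimal stop-word list for the illustration.
STOP_WORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "for", "on"}

def keywords(text, top_n=5):
    """Pick the top-N most frequent non-stop-words as candidate keywords."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return [w for w, _ in counts.most_common(top_n)]

def jaccard(a, b, k=3):
    """Similarity of two texts via k-word shingles; values near 1.0
    flag likely duplicated content."""
    def shingles(t):
        ws = re.findall(r"[a-z]+", t.lower())
        return {tuple(ws[i:i + k]) for i in range(len(ws) - k + 1)}
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

doc = "Commercial content management covers content formation and content distribution."
print(keywords(doc))        # most frequent term is "content"
print(jaccard(doc, doc))    # identical texts -> 1.0
```

In a real ECCS these steps would of course operate on a full linguistic pipeline (stemming, language-specific stop lists, stream indexing); the sketch only shows the shape of the computation.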
ECCS is used for the implementation of e-business in the information service field with active use of the benefits of Internet technologies. ECCS is designed to provide information services such as an online newspaper, online magazine, online edition, online publishing, and an online store for selling content. It is proposed to use ECCS to promote services through publishing houses, newspapers, magazines, news agencies, educational institutions, software development companies or companies which sell content without media. The types of activities where ECCS is applicable are informational (publishing, address and reference, telecommunication, provider), informational and consulting (advertising, marketing, partner reliability testing, distance education) and consulting (legal, economic, medical and other types).
The spheres of application of electronic content commerce systems:
1) for content online sales via online newspapers, online magazines, distance learning, online editions, online publishing, portals containing informative/entertaining/children's content;
2) for content offline sales via such systems as copywriting services, Marketing Services Shop or RSS Subscription Extension;
3) online stores for selling e-Books, video, software, music, movies, pictures, digital art, manuals, articles, certificates, forms, files etc.;
4) for storing various types of content via cloud storage or cloud computing.
ECCS is intended to solve problems related to the rapid growth of content on the Internet and in the field of e-business activity, as well as to widen access to information resources through the Internet, support the active development of e-business, expand the set of information products and/or services, meet the increasing demand for information products and/or services, create technologies and means, and expand the scope of information resources processing methods.
The lack of a common standardized approach to overall ECCS design and to the process of information resources elaboration causes a number of issues when developing a typical architecture for such systems. Due to the lack of a common and detailed classification of ECCS, it becomes problematic to define and form unified methods of information resources processing in these systems. This creates problems for the implementation of the appropriate information resources processing subsystems in ECCS, such as the formation, management and maintenance of content.
The existing ECCS work by algorithms unknown to a wide range of programmers and specialists in the field of e-business. When creating a new ECCS, teams of specialists have to re-develop methods and tools for information resource processing and content life cycle support. Teaching and learning materials for specialists in the field are missing, as are studies of the patterns and level of impact on ECCS functioning of implementing all or some stages of the commercial content life cycle for information resources processing. Analysis of the functioning results of known existing ECCS is not available because, as commercial projects, their administrative units cannot be accessed.
The novelty of the project lies in designing a generalized typical architecture as well as methods, tools and technologies for ECCS creation, and in implementing the stages of the commercial content life cycle. Implementation of the subsystems for formation, management and maintenance of commercial content in ECCS shortens the production cycle and saves time when distributing commercial content, and increases the potential/constant audience and the number of participants in e-business, which promotes its active development and the extension of ECCS functionality. The developed recommendations for designing the overall typical ECCS architecture differ from existing ones by the detailed elaboration of steps and the presence of information resources processing subsystems, which make it possible to effectively maintain the content life cycle at the level of the system developer (reducing development time and resources, improving the quality of system operations). Software tools for the creation, management and maintenance of content were developed and implemented in order to reach a greater operational effect at the level of the owner (increasing profitability, growing user interest) and of the user (comprehensibility, interface simplification, unification of the information resources elaboration process, and a wider choice of functional capabilities) of ECCS.
In order to estimate the time and financial expenses for ECCS creation, an enlarged plan was created showing each stage of the task solution.
This reduces the amount of time needed for drafting the project and the number of project participants, and clearly regulates the procedure of project implementation by identifying the time spent on performing each subtask. The amount of resources required for solving individual subtasks, and the roles and skills of these resources, are specified in the operations plan. The time schedule of ECCS development allows tracking expenses in the form of a Gantt chart developed with MS Project tools.
Key words: information resources, commercial content, content analysis, content monitoring, content search, electronic content commerce system.

22. Голощук Р. О., Думанський Н. О. Особливості впровадження технологій, характерних моделей і методів дистанційного навчання.


Roman Holoshchuk1, Nestor Dumanskyi2
Social Communication and Information Activities Department,
Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE,

The article presents research results based on the experience of developing the electronic textbook Fundamentals of the Electronic Circuits Theory. It reveals the concept, structure and principles of the development of the electronic textbook. A detailed procedure of knowledge evaluation is outlined. The electronic part of the book also provides an online self-assessment tool.
The article describes methods of distance education based on combined methods of presentation, the structure of distance learning, and a conceptual scheme for representing educational materials. The technology of implementing distance learning modules in popular e-learning systems is examined, and distance learning is analyzed with a review of its basic properties.
Following the systemic-approach principles of decentralization and uncertainty, and based on induction and synthesis of the student's previous distance education experience, the required education is formed gradually using distance learning technologies.
Basic requirements for the effective creation of distance learning resources are formulated. The main tasks of the designed and developed e-learning system for organizing the distance learning process in the children's remote consulting centre network of Lviv region on the basis of network technologies are described. The described distance learning system can be used as a component of an information-analytical learning management system in secondary and higher educational institutions of Ukraine, complementing, in particular, related objectives of the academic process and technical activities.
The principle of an intelligent system for remote knowledge control (testing) for the “Galician tournament for young science” is described.
The developed e-course book demonstrates a new approach to training and methodological support in teaching basic disciplines. The authors have successfully presented all types of lessons of the Fundamentals of the Electronic Circuits Theory course and similar courses taught at Lviv Polytechnic National University in a single 330-page coursebook along with a CD. It provides better results in the teaching process, as new theory is learnt with the application of new methods of information access, practical skills are acquired with the help of both traditional and computer methods, and laboratory tasks are done on the basis of simulation activities. The electronic part of the book also provides an online self-assessment tool.
The concept of the textbook “Fundamentals of the theory of electronic circuits”, jointly developed by the faculty of Lviv and Kyiv Polytechnic and edited by Professor Yu. Bobalo, is based on the following factors: the complexity of the learning process, which integrates various forms of activities (lectures, practical and laboratory classes, independent work) within the volume of credits allocated to tasks; and the importance of knowledge self-assessment and of effective preparation for laboratory work.
The textbook demonstrates a new approach to the teaching of basic subjects. Its electronic part also allows an independent online assessment of knowledge.
This paper considers the construction of an intelligent system for remote knowledge control (testing) for the “Galytsky tournament of young informatics”. The technique and algorithms for conducting on-line testing in a net-oriented informational-educational environment for diagnosing participants' level of knowledge are presented. The advantages and shortcomings of existing information technologies for the design of Web testing are discussed.
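The abstract does not disclose the tournament system's actual testing algorithms, so the sketch below is only a hypothetical illustration of the general idea of on-line test scoring and knowledge-level diagnosis; the data model, weighting scheme, and grading thresholds are invented for the example.

```python
def score_test(answers, key, weights=None):
    """Return (raw score, max score) for one participant.
    answers and key map question id -> chosen/correct option."""
    weights = weights or {q: 1 for q in key}          # equal weights by default
    got = sum(weights[q] for q, a in answers.items() if key.get(q) == a)
    total = sum(weights.values())
    return got, total

def knowledge_level(got, total):
    """Map the score share onto a coarse diagnostic scale (assumed cut-offs)."""
    share = got / total
    if share >= 0.9:
        return "high"
    if share >= 0.6:
        return "sufficient"
    if share >= 0.3:
        return "medium"
    return "low"

key = {"q1": "b", "q2": "a", "q3": "d"}
got, total = score_test({"q1": "b", "q2": "a", "q3": "c"}, key)
print(got, total)                      # 2 of 3 answers match the key
print(knowledge_level(got, total))     # 2/3 >= 0.6 -> "sufficient"
```

A real testing subsystem would add timing, randomized question selection, and persistence, but the diagnosis step reduces to a mapping like the one shown.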
Key words: distance education, distance learning, e-learning, information and communications technology, electronic textbook, virtual learning environment.

23. Катренко А. В., Пастернак О. Проблема оптимальності в теорії та практиці прийняття рішень.


Anatoly Katrenko1, Olena Pasternak2
Information Systems and Networks Department,
Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE,

The article reviews system aspects of the optimality problem within the framework of the structure of the decision-making process, connecting it with decision-making models and tasks and with theoretical and practical aspects peculiar to optimal solutions. It shows that the optimality problem in the overall decision-making process is connected with the aim pursued by the decision maker. The quality of target articulation is directly connected with the optimality or non-optimality of the decisions that will be made; that is why the target, formulated in natural language, should reflect the decision maker's aim. A descriptive model of the common decision-making task is proposed, which considers the target aspect through its reflection into the criteria set and the synthesis of deciding rules. Such an approach allows considering the decision maker's system of preferences in the optimal decision-making process with the necessary degree of adequacy. Target partitioning methods based on its qualitative formulation are researched, which allow obtaining the criteria set used to assess the degree of achievement of various aspects of the goal. It is illustrated that which decisions are considered optimal, and how well they are coordinated with the decision maker's system of preferences, depend on these results. The characteristics of decision-making methods are analyzed from the point of view of searching for optimal decisions, and the use of the branch and bound method as one of the most flexible for different conditions is justified. The structure of the branch and bound method is formalized, and the features of its implementation depending on practical requirements are described. A general structure of algorithms aimed at evaluating optimal solutions and their practical applications is suggested.
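The branch and bound structure discussed above can be illustrated on a classic 0/1 knapsack problem. This is a generic textbook sketch with invented data, not the authors' formalization: branches fix each item in or out, and an optimistic fractional-relaxation bound prunes subtrees that cannot beat the incumbent.

```python
def knapsack_bnb(values, weights, capacity):
    """Maximize total value of items fitting in `capacity` (0/1 knapsack)
    by branch and bound. Assumes positive weights."""
    n = len(values)
    # Explore items in decreasing value density; this tightens the bound.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)

    def bound(idx, cap, val):
        # Optimistic bound: greedily fill remaining capacity, allowing a
        # fractional final item (LP relaxation of the subproblem).
        for i in order[idx:]:
            if weights[i] <= cap:
                cap -= weights[i]
                val += values[i]
            else:
                return val + values[i] * cap / weights[i]
        return val

    best = 0

    def branch(idx, cap, val):
        nonlocal best
        best = max(best, val)                      # update incumbent
        if idx == n or bound(idx, cap, val) <= best:
            return                                 # prune: cannot improve
        i = order[idx]
        if weights[i] <= cap:                      # branch 1: take item i
            branch(idx + 1, cap - weights[i], val + values[i])
        branch(idx + 1, cap, val)                  # branch 2: skip item i

    branch(0, capacity, 0)
    return best

print(knapsack_bnb([60, 100, 120], [10, 20, 30], 50))  # -> 220
```

The same skeleton (incumbent, bound, branching rule) adapts to the IT-project selection and resource allocation problems the abstract mentions by changing the bound and the branching variables.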
The usefulness of combining the objectives tree method with the analytic hierarchy process, reflecting the set of goals in optimality criteria with subsequent selection of the optimal solution, is justified. On this basis, the authors developed a two-stage decision-making procedure, which has been applied in solving a number of problems: forming a portfolio of IT projects, allocating resources, and investing in IT.
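A common computational step of the analytic hierarchy process referred to above is deriving priority weights for criteria from a pairwise-comparison matrix. The sketch below uses the standard geometric-mean approximation of the principal eigenvector; the three-criteria matrix is a hypothetical example, not data from the article.

```python
import math

def ahp_priorities(matrix):
    """matrix[i][j] states how much criterion i outweighs criterion j
    (Saaty's 1..9 scale, with matrix[j][i] = 1/matrix[i][j]).
    Returns normalized priority weights summing to 1."""
    n = len(matrix)
    geo = [math.prod(row) ** (1.0 / n) for row in matrix]  # row geometric means
    s = sum(geo)
    return [g / s for g in geo]

# Hypothetical comparison of 3 criteria; criterion 0 dominates the others.
m = [[1,   3,   5],
     [1/3, 1,   3],
     [1/5, 1/3, 1]]
w = ahp_priorities(m)
print([round(x, 3) for x in w])  # weights in decreasing order, summing to 1
```

In the two-stage procedure described in the abstract, weights like these would score the goal-tree criteria before the optimal alternative is selected.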
Key words: optimality, decision-making, structure, target, target tree, system-oriented analysis, model, hierarchy, criterion of optimality.

24. Кунанець Н. Е., Пасічник В. В., Федонюк А. А. Соціокомунікаційна інженерія: предмет, об’єкт і методи дослідження


Nataliia Kunanets1, Volodymyr Pasichnyk2, Anatoly Fedonyuk3
1,2Information Systems and Networks Department,
Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE,
3Department of higher mathematics and informatics,
Lesya Ukrainka Eastern European National University

The authors describe the scientific grounding of the concept of “social and communication engineering” and outline the object, the subject and the research methods of a new type of engineering, which is actively forming and is objectively demanded in today's information society.
Social communications are an object of study, research and analysis in many fields of knowledge, yet the technologies that implement these processes have remained unexplored, which necessitates the formation of a scientific field responding to this need for analysis and research. The authors propose to present it under the generalized name of social and communication engineering, defining it as a combination of methods, means and ways that allow designing and creating qualitative and effective social and communication technologies and systems. Thus, social and communication engineering is a science that explores the processes of construction, design and creation of social and communication technologies and systems.
The formation of social and communicative engineering, as one of the newest types of engineering, objectively requires defining its subject, object and research methods. According to the authors, its object is social communications and their components, and its subject is the methods, means and ways of designing and constructing social and communicative technologies and systems. The methods of social and communicative engineering are the specific techniques used for the design and construction of social and communicative technologies and systems, as well as general scientific ones, among which system analysis occupies the leading place. The term social communication here means the complex of technologies implementing the system of social interaction that provides the communication processes of social institutions, organized communities and individuals.
There is a need to form rules and clear principles for building social and communicative relationships in the information society. Social and communicative engineering, as the science that studies the processes of designing and creating social and communicative systems, is in demand particularly during the formation of systems of relations between different parties and political platforms and between the philosophical systems of various communities through establishing communications, in particular using the possibilities of disseminating information through social networks.
Social and communicative engineering forms the rules for the correct construction of social groups, sets up their internal connections, and defines the rules for building relationships with the outside world. Consider, for example, the formation of research groups when setting up work within a new scientific project. Researchers are selected and scientific issues are raised. The group members form social and communicative relations among themselves as well as with external social and communicative systems.
Social and communicative engineering investigates the peculiarities of a social and communicative system as a whole, determined not only by the summary properties of its individual elements or subsystems, but also by its specific structure and special integrative system bonds. Any information social institution is considered an adaptive, multifunctional, open cultural and civilization system, the purpose of which is to assist the circulation and development of accumulated human knowledge by providing free access to it, to preserve documented knowledge as social knowledge, and to form and provide channels for information exchange. Entering the information and communication systems of the region, each of them implements the function of broadcasting information, data and knowledge in the social and cultural dimensions of time and space, giving the subjects and objects an opportunity to realize the communication process in co-existence.
Within the concepts of social and communicative engineering, it is suggested to consider information both from the standpoint of its receiving, storage, transmission, conversion and filtering, and from the standpoint of its use in communication processes. Information flows are considered in conjunction with certain structural schemes that have some common features: the sources and users of information, the volume, the presentation, the direction of transmission, the place and type of storage, and others. These structural schemes are used to analyze and minimize data flows, reduce their volume, and identify information duplication and doubled ways of its transmission. The concept of information has a high degree of universality, and in the general sense the functioning of a social and communicative system is considered as the conversion of input data into output data by taking certain decisions inside the system.
Key words: social communication, social institution, social engineering, sociocommunications engineering.