№ 814 (2015)
Summary
“INFORMATION SYSTEMS AND NETWORKS” BULLETIN
INFORMATION SYSTEMS, NETWORKS AND TECHNOLOGY
1. Artemenko O. I., Pasichnyk V. V., Yegorova V. V. Information technologies in the tourism industry. An analysis of applications and research results
INFORMATION TECHNOLOGY IN TOURISM. ANALYSIS OF APPLICATIONS AND RESEARCH RESULTS
Olga Artemenko1, Volodymyr Pasichnyk2, Valery Yegorova3
1Bukovinian University, Automated Management Systems Department,
2,3Information Systems and Networks Department, Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE, E-mail: 1o_hapon@meta.ua, 2vpasichnyk@gmail.com, 3changable92@gmail.com, 3ehorova.valeriya@gmail.com
The article introduces information technology and discusses its application in the tourism industry.
The main aim of the article is to study information technology in the sphere of tourism and to distinguish the issues to be dealt with. The use of information technology in the field of tourism is concentrated in Destination Management Organisations, insurance and transport companies, travel agencies, hotels, cafes and restaurants, as well as in providing services to individual tourists and tourist groups. Information and communications technology (ICT) has become a critical element of the tourism industry, forming the ‘info-structure’ and the foundation of information access and usage. The article is written in the form of an analytical review of new information technology in the sphere of tourism. The authors present the most important characteristics of the research carried out by leading specialists in the e-tourism industry. The impact of this research is analyzed, and a number of urgent problems in the IT-oriented tourism sector are identified.
As a result, the most popular sphere of applying IT in the field of tourism is distinguished, namely the use of information technology in personalised services. Many people travel all over the world for different purposes, such as business, relaxation, adventure, and education. They may not have enough time to pre-plan a trip, so they need location-aware information systems to help them make instant decisions during the trip. This is what the majority of scientists are working on now.
However, a number of key issues still remain to be dealt with: good personalized apps for tourists; trip planners for groups of tourists; “smart” route planner technology; useful and reliable in-trip systems.
Keywords: tourism, information technology, e-tourism, in-trip systems, decision support systems, planning a trip, smart city, mobile information technology.
2. Berko A. Yu., Alieksieieva K. A. Processing of heterogeneous data in the information resources of Web-systems
HETEROGENEOUS DATA PROCESSING IN WEB-SYSTEMS INFORMATION RESOURCES
Andriy Berko1, Kateryna Alieksieieva2
1General Ecology and Ecoinformation Systems Department,
2Social Communications and Information Activity Department,
Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE,
E-mail: 1berkoandriy@i.ua, 2kateryna.alekseyeva@gmail.com
Today a large proportion of information systems of various orientations are created using modern Internet technologies. The basis of such systems is an agreed and combined data set, which serves as the unified functional web-resource of the information system. Usually, this set combines data that vary in content, format, representation and processing method. By the way it is formed, such a resource can be unitary, consolidated, integrated or distributed, and strongly or semi-structured. One of the important tasks of the web-resource design process is to provide coherent representation, storage and interpretation of data at all stages of its processing. One of the recognized methods of achieving such unity of data is data integration.
Principal provisions of methods for designing information web-resources, based on dividing the data integration process into syntactic, structural and semantic integration phases, have been developed in this work. This way of designing information resources is a further development of the classical approach to integration. It allows data structures, representation methods, processing and the final interpretation of values to be defined independently of each other. This ensures the highest level of compliance, integrity and relevance of the final information web-resource. Data integration at the syntactic level involves the development of a single system of data value presentation in the process of resource design, within the resource and at the user interface level, as well as the exchange within this single system with other systems.
The integrated structure of information web-resource design allows a unified heterogeneous data scheme to be designed that combines descriptions of relational, poorly structured, active, streaming, and other types of data. The integration of semantics is the final stage of the web-resource information system design, aimed at developing agreed rules for the interpretation, perception and use of the data combined in this resource. Using the techniques developed in this paper provides additional opportunities to improve the quality of information web-resources, as well as to develop and implement effective CASE-tools for their design.
Key words: web-resource, data integration, distributed data systems, heterogeneous data.
3. Vasyliuk A. S., Basyuk T. M. Intelligent analysis of unitherm parameters
INTELLIGENT ANALYSIS OF UNITHERM PARAMETERS
Andrii Vasyliuk1, Taras Basyuk2
Information Systems and Networks Department, Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE, E-mail: 1zoso81@mail.ru, 2btaras@rambler.ru, 2basyuk.ism@gmail.com
The purpose of the publication is to determine the characteristics of intelligent analysis of unitherms. The study will provide the means to implement a comprehensive analysis of unitherm parameters, leading towards the design of algorithm formula synthesis systems that implement algorithm formula adaptation. To achieve this goal, it is necessary to solve the following main tasks:
— to analyze the well-known analysis systems of unitherms parameters;
— to examine the well-known synthesis systems of algorithms formulas.
The object of the study is the process of calculating unitherm parameters. The subject of the study is the methods and means of analyzing unitherm parameters. The scientific novelty is the study of the features of unitherm parameters. The practical value of the work lies in the formation of parameters used in the design of algorithm formula synthesis systems that implement algorithm formula adaptation. The authors developed a methodological framework for constructing a system for mining unitherm parameters. An analysis of known means and methods has been conducted, which showed the lack of mechanisms providing guidelines for unitherm parameter analysis. The unitherm parameters and their components have been investigated. The software to calculate unitherm parameters has been synthesized and minimized.
Keywords: unitherm, algorithms, mathematical model.
4. Vysotska V. A., Chyrun L. V. A formal model of information resources processing in electronic content commerce systems
FORMAL MODEL OF INFORMATION RESOURCES PROCESSING IN THE ELECTRONIC CONTENT COMMERCE SYSTEMS
Victoria Vysotska1, Lyubomyr Chyrun2
1Information Systems and Networks Department, 2Software Department, Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE, E-mail: 1Victoria.A.Vysotska@lpnu.ua, 2chyrunlv@mail.ru
The rapid development of the Internet contributes to the increase in the demand for efficient data of a production / strategic nature and to the implementation of new forms of information services through modern information technology (IT) of e-commerce. Documented information prepared in accordance with users' needs is commercial content. Today e-commerce is a reality and a promising business process. The Internet is the business environment, and commercial content is a commodity with the highest demand and selling rate. It is also the main object of the electronic content commerce processes. Commercial content can be immediately ordered, paid for and received on-line as a commodity. The entire spectrum of commercial content is sold via the Internet — scientific and publicistic articles, music, books, movies, pictures, software etc. Well-known corporations that implement electronic content commerce are Google through Google Play Market, Apple — Apple Store, Amazon — Amazon.com. Most decisions and research are conducted at the level of specific projects. Electronic content commerce systems (ECCS) are built on the closed principle as non-recurrent projects. Modern ECCS are focused on the commercial content realization that is conducted outside the system. The design, development, implementation and maintenance of ECCS are impossible without the use of modern methods and information technologies of formation, management and maintenance of commercial content. The development of the technology of information resources processing is important in view of such factors as the lack of theoretical grounding of methods for studying commercial content flows and the need for unification of software methods of information resources processing in ECCS. A practical factor of the processing of information resources in ECCS is related to the solution of problems of formation, management and support of growing volumes of commercial content on the Internet, the rapid development of e-business, the widespread availability of the Internet, the expansion of the set of information products and services, and the increase in the demand for commercial content. Principles and IT of electronic content commerce are used in creating on-line stores (selling eBooks, software, video, music, movies, pictures), on-line systems (newspapers, magazines, distance education, publishing), off-line selling of content (copywriting services, Marketing Services Shop, RSS Subscription Extension), cloud storage and cloud computing. The world's leading producers of information resources processing tools, such as Apple, Google, Intel, Microsoft and Amazon, are working in this area.
The theoretical factor of information resources processing in ECCS is connected with the development of IT for commercial content processing. In the scientific studies of D. Lande, V. Furashev, S. Braychevskyi and A. Grigoriev, mathematical models of electronic processing of information flows are investigated and developed. G. Zipf proposed an empirical law of the distribution of word frequencies in natural-language text content for its analysis. In the works of B. Boiko, S. McKeever and A. Rockley, models of the content life cycle are developed. The methodology of content analysis for processing textual data sets was initiated and developed by M. Weber, J. Kaiser, B. Glaser, A. Strauss, H. Lasswell, O. Holsti, Ivanov, M. Soroka and A. Fedorchuk. In the works of V. Korneev, A. F. Gareev, S. V. Vasyutin and V. V. Reich, methods of intellectual processing of text information were proposed. EMC, IBM, Microsoft, Alfresco, Open Text, Oracle and SAP have developed the Content Management Interoperability Services specification, based on a Web-services interface, to ensure the interoperability of electronic content commerce system management. From the scientific point of view, this segment of IT has not been investigated enough. Each individual project is implemented almost from the very beginning, in fact based on personal ideas and solutions. In the literature, very few significant theoretical studies, research findings, or recommendations for the design of ECCS and the processing of information resources in such systems are highlighted. It has become urgent to analyze, generalize and justify the existing approaches to the implementation of e-commerce and the building of ECCS. The actual problem of creating a complex of technological products is based on the theoretical study of methods, models and principles of information resources processing in ECCS, grounded in the principle of open systems, which make it possible to manage the process of increasing the sales of commercial content. The analysis of these factors enables us to infer the existence of an inconsistency between the active development and extension of IT and ECCS on the one hand, and the relatively small amount of research on this subject and its locality on the other. This contradiction raises the problem of the containment of innovation development in the segment of electronic content commerce through the creation and introduction of appropriate new advanced IT, which negatively affects the growth of this market. Within this problem there is an urgent task of developing scientifically grounded methods of processing the information resources of electronic content commerce, and of building, on the basis of software, the processes of creation, dissemination and sustainability of ECCS. In this paper, a study to identify patterns, characteristics and dependencies in the processing of information resources in ECCS was carried out.
Keywords: Web resources, content, content analysis, content monitoring, content search, electronic content commerce systems
5. Droniuk I. M., Fedevych O. Yu. Analysis of computer network traffic based on experimental data from the Wireshark environment
COMPUTER NETWORK TRAFFIC ANALYSIS BASED ON EXPERIMENTAL DATA OF WIRESHARK
Ivanna Droniuk1, Olga Fedevych2
Automated Control Systems Department, Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE, E-mail: 1ivanna.droniuk@gmail.com, 2olhafedevych@gmail.com
This article is devoted to the analysis of changes in computer network traffic obtained using the network protocol analyzer Wireshark. The basic features of this environment, its advantages and disadvantages, have been analyzed and shown. The selected environment recognizes the structure of network protocols, allowing it to dissect network traffic packets, with subsequent visualization of the field values at an arbitrary level of the protocol hierarchy. To capture and store packets, pcap library functions have been used. It is this advantage of the Wireshark software that has ensured comfortable and optimal collection and analysis of the data necessary for the investigation. Moreover, the Wireshark network protocol analyzer supports many source data formats, providing the opportunity to view measurement data files captured by other applications and environments.
In order to study the network, the main functions of the software have been tested. To verify the theoretical calculations, experimental studies of network traffic have been performed. To collect the experimental data, the network of the Automated Control Systems Department of Lviv Polytechnic National University (February 2015) and the network of the Institute of Theoretical and Applied Informatics of the Polish Academy of Sciences in Gliwice, Poland (May 2014) have been used. The data are visualized as graphs and systematized in an inspection table. Observations have been conducted for the following parameters: the total number of packets, the average number of packets, the average packet size and the average packet bit rate. The received data have been used to test theoretical models. Such traffic studies are the basis for providing high efficiency of equipment in computer networks. The relative load coefficient of the network is proposed to be used for the network analysis. The important parameters of the network have also been visualized in graphs.
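The aggregate parameters listed above are straightforward to reproduce outside Wireshark. The following minimal Python sketch (our illustration, not the authors' tooling) assumes the scapy library and a hypothetical capture file named capture.pcap, and computes the same summary statistics from a saved trace:

```python
# Minimal sketch: summary statistics of a saved pcap trace with scapy.
# "capture.pcap" is a hypothetical file name used for illustration.
from scapy.all import rdpcap

packets = rdpcap("capture.pcap")                 # load the saved capture
total_packets = len(packets)
total_bytes = sum(len(p) for p in packets)       # frame sizes in bytes
duration = float(packets[-1].time - packets[0].time) if total_packets > 1 else 0.0

avg_packet_size = total_bytes / total_packets                     # bytes/packet
avg_bit_rate = total_bytes * 8 / duration if duration else 0.0    # bits/second
avg_packet_rate = total_packets / duration if duration else 0.0   # packets/second

print(f"packets: {total_packets}")
print(f"average packet size: {avg_packet_size:.1f} B")
print(f"average bit rate:    {avg_bit_rate / 1e6:.3f} Mbit/s")
print(f"average packet rate: {avg_packet_rate:.1f} pkt/s")
```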
Keywords: traffic, computer network, network protocol analyzer, bit rate.
6. Kovalyk M. I., Kaminskiy R. M. Peculiarities of component interaction in the Android mobile platform
PECULIARITIES OF COMPONENTS INTERACTION IN ANDROID MOBILE PLATFORM
Mykhailo Kovalyk1, Roman Kaminskiy2
Information Systems and Networks Department, Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE, E-mail: 1mishakovalyk@gmail.com, 2kaminsky.roman@gmail.com
The Android system has currently become quite widespread among mobile platforms. This is due to the openness of the source code and the flexibility and reliability of the operating system. Any program in Android is a system of interconnected components such as activities, services, content providers and broadcast receivers. It is often necessary to exchange data or execution results between these components for correct cooperation. The interaction between activities and services is the most laborious, because each of them has its own life cycle and can act as an independent component. In the Android system, the most widespread option for background work is to use threads, asynchronous tasks and services. A service, as opposed to threads and asynchronous tasks, is an independent component: it can keep running even after the main program finishes, and it can operate either in the same process as the main program (locally) or in a separate one. For each option there is an appropriate mechanism for intra- or inter-process work. The Android platform is a relatively new system, and conventional terms and techniques regarding the proper interaction of components in the system are still insufficiently researched. The Android system offers a wide range of variants for interaction between activities and services, depending on the task. The interaction can be both inter-process and intra-process. In general, the need for inter-process interaction appears rarely and only in fairly large projects. Most programs handle inter-process interaction through high-level components, but it is also possible to switch to low-level processing using RPC (Remote Procedure Call) or a messenger. RPC is preferred when it is necessary to increase productivity by processing incoming requests simultaneously; when this is not required, it is better and easier to use a messenger.
Key words: Service, Activity, Intent, BroadcastReceiver, Interprocess Communication.
7. Kravets P. O. A matrix stochastic game with Q-learning
MATRIX STOCHASTIC GAME WITH Q-LEARNING
Petro Kravets
Information Systems and Networks Department, Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE, E-mail: krpo@i.ua
The processes of game decision-making under uncertainty conditions are the object of the research in this article. The subject of this research is the model of the matrix stochastic game under uncertainty of the elements of the gains matrixes. The purpose of the study is the construction of a stochastic game model with Q-learning for the adaptive identification of gains matrixes and their use for the definition of mixed strategies based on the Boltzmann distribution.
Stochastic game models are used for solving problems connected with the necessity of decision-making in conditions of uncertainty – in biology, psychology, sociology, political science, military science, economy, marketing, ecology, and information, program and technical systems. The features of such problems are: 1) the distributed or multivariate decision-making environment; 2) the internal stochasticity of the environment; 3) the full or partial absence of aprioristic information on the decision-making environment; 4) the controllability of the environment and the possibility of the distributed realisation of decision-making variants; 5) the definiteness of the vector purpose of decision-making; 6) the discreteness and the finiteness of the set of decision-making variants; 7) the stochastic independence of the choice of decision variants in space and in time; 8) the possibility of repeated realisations of players' action variants on an unlimited interval of time; 9) the distributed, locally-caused character of the information formation and gathering for the statistical identification of the decision-making environment; 10) the possibility of applying a distributed game algorithm which provides achievement of the area of the trade-off decision; 11) the realisation of the game algorithm on a real time scale; 12) the possibility of defining the stopping moments of the game algorithm for its practical application. The matrix stochastic game is defined by the set of players, the structure of their local interactions, the matrixes of random gains distributions, the sets of pure and mixed strategies, and the decision-making rules. Pure strategies define the sets of decision variants, and mixed strategies define the conditional probabilities of the pure strategies choice.
Unlike the determined game, in the uncertainty conditions the elements of matrixes of gains are not known to players a priori. The stochastic game participants receive only current reactions of the environment in reply to the realisation of their pure strategies. The pure strategies of players are defined randomly on the basis of probability distributions which are set by the mixed strategies. To find stochastic game solutions in the conditions of uncertainty iterative methods are used. The repetition of the game steps is necessary to gather the information on players’ strategies efficiency while optimising their criterion functions.
The existing recurrent methods of stochastic game solving are based on the search for optimum values of the mixed strategies within the unit simplex. The belonging of the mixed strategies to the unit simplex is ensured by a projective operator. Such methods are simple to program, do not demand an information exchange between players, and in uncertainty conditions provide a power-law order of convergence rate. Besides, the stochastic game solving can be executed by other methods based on the stochastic identification of the decision-making environment, for example, the method based on the law of large numbers for the estimation of gains matrixes, or the Q-learning method. These methods require knowledge of the game structure – the number of players and the number of their pure strategies. The currently formed gains matrixes are used for the construction of the vectors of mixed strategies.
The Q-learning method carries out the estimation of the elements of gains matrixes with an iterative algorithm and, in the stochastic formulation, provides adaptive estimation: the matrix elements which, on average, provide the greatest gain are considered and recalculated more often.
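For illustration only (the payoff matrix and all parameters below are invented, not taken from the paper), a minimal Python sketch of one player's Q-learning over a noisy payoff matrix, with the mixed strategy formed from the Boltzmann distribution, might look as follows:

```python
# Minimal sketch: one player's Q-learning of noisy payoffs with a
# Boltzmann (softmax) mixed strategy. The payoff matrix, temperature
# and opponent behaviour are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
mean_payoff = np.array([[1.0, 0.2],      # unknown to the player;
                        [0.4, 0.8]])     # only noisy gains are observed
n_actions = mean_payoff.shape[0]

Q = np.zeros(n_actions)                  # Q-estimates of the pure strategies
tau = 0.3                                # Boltzmann temperature
for t in range(1, 5001):
    p = np.exp(Q / tau)                  # mixed strategy from current estimates
    p /= p.sum()
    a1 = rng.choice(n_actions, p=p)      # player's pure strategy
    a2 = rng.integers(n_actions)         # opponent plays uniformly here
    reward = mean_payoff[a1, a2] + rng.normal(0, 0.1)  # noisy current gain
    Q[a1] += (reward - Q[a1]) / t        # decreasing learning step

print("estimated gains:", Q.round(3), "mixed strategy:", p.round(3))
```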
Based on the results of the performed research, it is claimed that the iterative Q-learning method provides stochastic game solving in the conditions of uncertainty of gains matrixes. The results of this work can be used in the problems of the collective choice of variants of decisions in the conditions of uncertainty.
To practically apply this method, it is necessary to define the conditions of convergence to one of the states of collective balance. In uncertainty conditions, the parameter values that ensure the fulfillment of the game convergence conditions can be established theoretically, on the basis of the results of the theory of stochastic approximation, or experimentally, during computer modelling.
The ranges of the parameters of the game Q-method that maintain its convergence to one of the Nash equilibrium points have been experimentally established in this work. An increase in the discounting parameter, a decrease in the dispersion of current gains and a decrease in the order of the learning step change rate increase the convergence rate of the Q-method.
Keywords: stochastic game, uncertainty conditions, Q-learning, Markovian recurrent method.
8. Lytvyn V. V., Hopyak M. Ya. Approximation of the trustworthiness of information objects of a subject area ontology based on polynomial splines
APPROXIMATION OF TRUSTWORTHINESS OF INFORMATION OBJECTS OF SUBJECT AREA ONTOLOGY BASED ON POLYNOMIAL SPLINES
Vasyl Lytvyn1, Mariya Hopyak2
Information Systems and Networks Department, Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE, E-mail: 1vasyll@ukr.net, 2mariya.hopyak@gmail.com
The evaluation of the quality of created ontologies is one of the vital problems of modern ontological engineering. This part of the process of ontology development is very important from the practical point of view. A method for automatically processing the incoming flow of information objects and a method for evaluating data trustworthiness in an information system based on polynomial spline approximation have been proposed in the article. The set of information objects is the basis of the conceptual scheme of the information system, thus the trustworthiness of the contents of this set is a problem of vital importance. The conceptual diagram indicates the entities that may exist in the problem area, and the entities that exist or could ever exist. It has been emphasized in the article that trustworthiness determines the credibility limit of a fact for the ordinary information system user. The characteristics of the source of a fact have been used for the evaluation, and the amount of time it has existed in the information system has also been taken into account. After determining the trustworthiness of such objects over time, it has been proposed to approximate it in order to remove unnecessary ontology objects whose trustworthiness falls below a certain pre-specified limit. This allows the effectiveness of the conceptual scheme of the information system, which is specified by the ontology of the subject area, to be increased. The goal of further research will be the task of automatically setting the trustworthiness of information objects depending on the information source.
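A minimal Python sketch of the approximation step (our illustration with invented sample values; the paper's actual spline construction may differ) fits a cubic polynomial spline to trustworthiness samples and flags objects that fall below a threshold:

```python
# Minimal sketch: approximate an object's trustworthiness over time with
# a cubic smoothing spline and flag it for removal below a threshold.
# The sample values and threshold are invented for illustration.
import numpy as np
from scipy.interpolate import UnivariateSpline

t = np.array([0, 1, 2, 3, 4, 5, 6, 7], dtype=float)        # observation times
w = np.array([0.9, 0.85, 0.8, 0.6, 0.55, 0.4, 0.35, 0.2])  # trustworthiness

spline = UnivariateSpline(t, w, k=3, s=0.01)   # cubic polynomial spline fit

threshold = 0.5
t_fine = np.linspace(t[0], t[-1], 200)
below = t_fine[spline(t_fine) < threshold]
if below.size:
    print(f"trustworthiness drops below {threshold} near t = {below[0]:.2f}; "
          "the object is a removal candidate")
```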
Keywords: ontology, information system, trustworthiness, approximation, polynomial splines
9. Melnykova N. I., Vovk O. B., Dubinets T. Development of an information technology for personalized medical data processing
INFORMATION TECHNOLOGY DEVELOPMENT OF PERSONALIZED MEDICAL DATA PROCESSING
Natalia Melnykova1, Olena Vovk2, Tadei Dubinets
Information Systems and Networks Department, Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE, E-mail: 1melnykovanatalia@gmail.com, 2olenavovk@gmail.com
This article is devoted to the development of an information technology for personalized medical information processing for decision making. A treatment decision-making support system (TDMS) is the means of implementing such a process. Its architecture has been developed, and the research results have been analyzed. The main stages of TDMS development and design have been outlined; they help perform the decomposition of control processes, describe the relationships between control flows and detail the sequence of using the methods and data processing in the system. The analysis of existing automated systems for the submission and processing of medical data has revealed that they do not fully meet the requirements of system mobility and the requirements for solving the problems at hand. All of them require complex logical inference under a high degree of uncertainty, incompleteness and inconsistency of input data. The solution of this problem is possible due to the intellectualization of these systems based on new information technology, in particular the application of the theory and practice of database management, web technologies, and the data warehousing concept as an aggregate information resource that contains consolidated information from the whole domain and is used for decision support, analysis and data mining.
Modern information technologies can significantly improve the administrative processes of medical institutions and increase the quality of medical services. An effective information strategy helps reduce the costs of services and improve their efficiency, so hospitals can significantly increase the level of their work. An integrated environment helps medical staff to get reliable and secure access to patients' data. Moreover, such a presentation of the information is not only best for understanding the patient's state, but also provides an opportunity for the patient to understand the medical data.
The defined organization of the system's work to support treatment decision making describes in detail the process of finding personalized treatment schemes. Priority requirements for data management have been specified that determine the practical expediency of the TDMS. The described mode of functioning of the treatment decision-making system and the principal stages of the system's functioning allow the processing of medical information to be sped up. As a result, the decision-making system produces the answer.
The operation principle of the TDMS involves the use of online resources, which increases the mobility of the system. It allows users to protect and update the information base. These characteristics of the TDMS are consistent with the determining demands for improving the quality of healthcare.
Keywords: support decision making system, treatment expert system, architecture of information technology, design expert systems.
10. Nych L. Ya., Kaminsky R. M. Determining the Hurst exponent via the fractal dimension computed by the box-counting method on the example of short time series
EVALUATION OF THE HURST EXPONENT VIA THE FRACTAL DIMENSION CALCULATED BY THE BOX-COUNTING METHOD ON THE EXAMPLE OF SHORT TIME SERIES
Lilya Nych1, Roman Kaminsky2
Information Systems and Networks Department, Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE, E-mail: 1nychliliya@mail.ru, 2kaminsky.roman@gmail.com
Information technology researchers have recently made wide use of methods of nonlinear dynamics. One of these methods is the fractal analysis of time series. To conduct such analysis, it is required to determine the value of the fractal dimension and the Hurst exponent, which are connected: the sum of the fractal dimension and the Hurst exponent equals two. The value of the fractal dimension is usually determined through the empirical Hurst exponent. However, the adequacy of this indicator is limited due to its empirical nature. The fractal dimension is defined via the box-counting method, that is, by counting the number of cells that contain at least one point of the time series. It is known that the box-counting method in its general formulation produces significant redundancy in the counted number of cells.
The aim of this work is to improve the accuracy of the fractal dimension values obtained by the box-counting method and, as a result, to obtain an accurate assessment of the Hurst exponent value.
The essence of the proposed modification of the box-counting method is that the number of cells is expressed as a fractional number. In other words, for each vertical column of cells the range of levels covered by the series is determined; this range is then divided by the cell size, and the results over all columns are summed.
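For orientation, here is a minimal Python sketch of the general scheme (with an invented random-walk series and plain integer cell counts, without the fractional refinement described above): count the occupied cells at several scales, fit the log-log slope to obtain the fractal dimension D, and use H = 2 − D:

```python
# Minimal sketch: estimate a series' fractal dimension by plain box
# counting and derive the Hurst exponent as H = 2 - D. The random-walk
# series is invented; the fractional-cell refinement is not reproduced.
import numpy as np

rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(size=512))
y = (y - y.min()) / (y.max() - y.min())        # normalize into the unit square
x = np.linspace(0.0, 1.0, y.size)

sizes = [1/4, 1/8, 1/16, 1/32, 1/64]           # cell side lengths
counts = []
for eps in sizes:
    cells = {(int(xi / eps), int(yi / eps)) for xi, yi in zip(x, y)}
    counts.append(len(cells))                  # cells hit by at least one point

# slope of log N(eps) versus log(1/eps) estimates the box-counting dimension
D = np.polyfit(np.log(1 / np.array(sizes)), np.log(counts), 1)[0]
print(f"fractal dimension D = {D:.3f}, Hurst exponent H = {2 - D:.3f}")
```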
Fractal analysis has been performed for ten time series. The Hurst exponent has been determined by the following three methods: the one directly developed by Hurst, the one modified by Peters, and the improved box-counting method via the fractal dimension. The results obtained using MS Excel have shown that the algorithm for determining the fractal dimension is accurate, and that the Hurst exponent value is better determined through the fractal dimension, which excludes any empiricism.
Key words: fractal dimension, Hurst exponent, box counting algorithm, R/S analysis, fractal analysis.
11. Panova O., Obelovska K. Analysis of the impact of adaptive adjustment of the number of EDCA access categories on the delay performance of a wireless network
THE ANALYSIS OF THE IMPACT OF EDCA ACCESS CATEGORIES NUMBER ADJUSTMENT ON WIRELESS NETWORK DELAY PERFORMANCE
Olga Panova, Kvitoslava Obelovska
Lviv Polytechnic National University, S. Bandery Str., 12/806, Lviv, 79013, UKRAINE, E-mail: obelyovska@gmail.com
Network performance depends on a number of factors. A considerable contribution to network performance is made by the access scheme to the physical environment. The access scheme to the physical environment is handled at the MAC layer (Medium Access Control) of the OSI (Open Systems Interconnection) model of communication. In the most popular wireless (WiFi) networks, basic access to the physical environment is provided by the Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) scheme. A part of the network throughput is used to transmit overhead information; as a result, the network goodput is reduced. Thus, it is essential to improve the efficiency of the access schemes of wireless networks to the physical environment.
In most works on improving access schemes to the wireless environment, one or several parameters of the collision avoidance algorithm are investigated. However, the efficiency of the introduced optimization methods is significantly limited by network conditions. Moreover, some of the proposed methods do not consider providing quality of service for traffic of different priorities.
In this paper, an adaptive ACs (Access Categories) number adjusting algorithm for the EDCA (Enhanced Distributed Channel Access) scheme is proposed. By introducing a data-collecting buffer and a switching mechanism for AC queues, we show that the total performance of the wireless network can be enhanced, especially under highly loaded network conditions. Using the developed simulator, wireless network performance under different conditions (size, load) has been investigated.
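For intuition about the underlying contention mechanics (a toy model of ours, not the authors' simulator; all parameters are invented): in CSMA/CA-style contention each access category draws a random backoff from its contention window, the smallest draw wins the slot, and ties count as collisions that double the losers' windows:

```python
# Toy slotted-contention sketch of CSMA/CA with two access categories:
# high priority uses a small CWmin, low priority a large one. Invented
# parameters; not the authors' EDCA simulator.
import random
from collections import defaultdict

CWMIN = {"high": 8, "low": 32}
CWMAX = 1024
random.seed(0)

cw = dict(CWMIN)                          # current contention windows
wins = defaultdict(int)
for _ in range(100_000):                  # contention rounds
    draws = {ac: random.randrange(cw[ac]) for ac in cw}
    lowest = min(draws.values())
    winners = [ac for ac, d in draws.items() if d == lowest]
    if len(winners) == 1:                 # successful transmission
        wins[winners[0]] += 1
        cw[winners[0]] = CWMIN[winners[0]]        # reset window on success
    else:                                 # collision: double colliding windows
        for ac in winners:
            cw[ac] = min(2 * cw[ac], CWMAX)

total = sum(wins.values())
for ac in CWMIN:
    print(f"{ac}-priority share of successful slots: {wins[ac] / total:.2%}")
```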
The simulation results demonstrate that our proposed adaptive ACs number adjusting algorithm for the EDCA scheme significantly outperforms the 802.11 specification and may reduce the average frame delay by up to 30-40% under tough network conditions (large network size and high network load).
Keywords: wireless networks, IEEE 802.11, medium access control, EDCA, adaptive ACs number adjusting.
12. Pasichnyk V. V., Shestakevych T. V. Modeling the information technology support of inclusive education for persons with special needs
THE MODELING OF INFORMATION AND TECHNOLOGICAL SUPPORT OF INCLUSIVE EDUCATION FOR PERSONS WITH SPECIAL NEEDS
Volodymyr Pasichnyk1, Tetiana Shestakevych2
Information Systems and Networks Department, Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE, E-mail: 1vpasichnyk@gmail.com, 2tshestakevych@gmail.com
Inclusive education is one of the promising forms of education for persons with special needs. A relevant task is to develop mathematical software for solving scientific and applied problems of supporting the inclusive educational process for persons with special needs on the basis of modern information technologies. A special requirement for the creation of such a formal model is a comprehensive analysis of the functional stages of the educational process, both for a particular person and for inclusive forms of education in general. To build a formal model of the IT-support system of inclusive education for persons with special needs, generative grammars have been used. The application of this formalism allowed the key features of inclusive education to be fully reflected in the model. A mathematical representation of the grammar production rules is a convenient way to identify dependencies which are sequentially formed in inclusive education for persons with special needs. The use of alphabets of nonterminal and terminal symbols enables a logical division between the transformations that take place in the educational process and the results of such transformations. The possibility of taking the context into account in generative grammars makes it possible to realize the defining feature of the education of persons with special needs, namely the implementation of the further steps of the educational process depending on the results achieved at the previous stages. In this formal model of the IT-support system of inclusive education for persons with special needs, a significant number of critical factors important for the personification of learning processes are accounted for. The constructed formal model allows us to develop a coherent system of information and technological support of the educational stages in the conditions of inclusion for persons with special needs, i.e. to significantly improve the overall support of the processes of teaching, education, social adaptation and integration throughout life for such categories of persons in the modern information society.
Key words: inclusive education, IT support, the education for persons with special needs, context-sensitive grammar.
13. Riznyk V. V. Optimal codes on vector combinatorial configurations
OPTIMAL CODES ON VECTOR COMBINATORIAL CONFIGURATIONS
Volodymyr Riznyk
Automated Control Systems Department, Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE, E-mail: rvv@polynet.lviv.ua
The concept of coding systems optimization based on vector combinatorial configurations, namely the Ideal Vector Rings models, is regarded in this paper. Moreover, the optimization has been embedded in the underlying combinatorial models. The favorable qualities of the Ideal Vector Rings provide breakthrough opportunities to apply them to numerous branches of science and advanced technology, with direct applications to vector data coding and information technology, signal processing and telecommunications, and other engineering areas. This paper belongs to the field of computer science and is aimed at improving the qualitative indices of multidimensional vector data information technologies and computer systems with respect to the transmission speed of vector data with automatic error correction, and data security, using a variety of multidimensional combinatorial configurations and the theory of finite cyclic groups. Some problems of computer engineering and information technologies which deal with the profitable use of mathematical methods for the optimization of coding systems based on two- and multidimensional Ideal Ring Bundles (tD-IRBs) are regarded. Special attention has been paid to interpretations of multidimensional Ideal Ring Bundles as vector cyclic groups and their numerous isomorphic transformations, using the theoretical relation of the mathematical models to the well-known theory of cyclic difference sets. The possibility of designing high-performance optimal monolithic vector coding systems, which provide vector data coding in a torus frame of reference using combinatorial optimization, has been shown. An example of the possibility of optimizing two-dimensional vector code systems based on 2D-IRBs has been presented. It has been shown that the proposed techniques provide the design of high-performance vector data coding and control systems using combinatorial optimization. Definitions of the Ring Monolithic Vector Codes have been given, such as the Numerical Optimum Ring Code, the Two-dimensional Optimum Ring Code and the Multidimensional Optimum Ring Code. The remarkable properties of the underlying models are very useful, taking into account the opportunity to generalize these methods and results to improve and optimize a larger class of information engineering and computer systems. The optimization has been embedded in the underlying combinatorial configurations. These design techniques make it possible to configure optimal two- and multidimensional vector coding systems using fewer code combinations in the system, while maintaining or improving the code size and other significant operating characteristics, using the high-speed corrected coding possibility of the system.
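As a small illustration of the underlying combinatorial object (this toy check is ours, not the paper's code): a one-dimensional Ideal Ring Bundle is a cyclic sequence whose sums of runs of consecutive elements cover every nonzero residue the same number of times; for instance, the ring (1, 2, 4) covers each of 1..6 modulo 7 exactly once:

```python
# Toy check (ours): verify the Ideal Ring Bundle property of a cyclic
# sequence: sums of proper runs of consecutive elements, taken around
# the ring, must cover the residues 1..n-1 equally often.
from collections import Counter

def ring_sums(ring, modulus):
    k = len(ring)
    sums = Counter()
    for start in range(k):
        total = 0
        for length in range(1, k):                 # proper runs only
            total += ring[(start + length - 1) % k]
            sums[total % modulus] += 1
    return sums

ring, n = (1, 2, 4), 7
sums = ring_sums(ring, n)
assert all(sums[r] == 1 for r in range(1, n))
print(f"{ring} is an Ideal Ring Bundle modulo {n}: "
      f"each nonzero residue appears exactly once: {dict(sums)}")
```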
Keywords: vector data coding, combinatorial configuration, torus cyclic group, Ideal Ring Bundle, optimization, security, transmission speed.
14. Savchuk T. O., Kozachuk A. V. Forecasting the number of network requests to a cloud application
FORECASTING THE NUMBER OF NETWORK REQUESTS TO A CLOUD APPLICATION
Tamara Savchuk, Andriy Kozachuk
Vinnitsa National Technical University, UKRAINE, E-mail: savchtam@gmail.com
The variable intensity of cloud application usage raises the problem of optimizing the allocation of computing resources for the maintenance of a cloud application. This problem is often solved by using reactive scaling, i.e., increasing or decreasing the computing facilities when a certain threshold of available system resources is reached. The process of changing the computing capacity is quite long, and cloud application users may experience delays or failures until it is finished. To avoid this drawback, proactive scaling based on a forecast of the cloud application usage intensity can be used [1]. This forecast can be constructed using time series methods applied to the time series of network requests received by the cloud application. Information about the schedule of events related to the cloud application can be used to improve the accuracy of the prediction by applying different forecasting methods depending on the state the cloud application is in. As a result of the conducted research, the accuracy of forecasting techniques for the time series of network requests to a cloud application has been estimated in different working modes of the application. It has been found that in the mode of growing traffic the most accurate prediction is given by the trend-based exponential smoothing method, while when forecasting the number of network requests between events, the most accurate method is seasonal ARIMA with daily seasonality.
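A minimal Python sketch of the two kinds of forecasts mentioned above (our illustration with a synthetic hourly request series and invented model orders; real request logs would replace the synthetic data) uses statsmodels:

```python
# Minimal sketch: trend-based exponential smoothing (Holt) and seasonal
# ARIMA with a 24-hour season, as in the abstract, on synthetic hourly
# request counts. Series and model orders are invented for illustration.
import numpy as np
from statsmodels.tsa.holtwinters import Holt
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(2)
hours = np.arange(24 * 14)                               # two weeks, hourly
requests = (100 + 40 * np.sin(2 * np.pi * hours / 24)    # daily rhythm
            + 0.5 * hours / 24                           # slow growth trend
            + rng.normal(0, 5, hours.size))              # noise

holt = Holt(requests).fit()                              # suits growing traffic
print("Holt, next 3 hours:", holt.forecast(3).round(1))

sarima = SARIMAX(requests, order=(1, 0, 1),
                 seasonal_order=(1, 0, 1, 24)).fit(disp=False)
print("seasonal ARIMA, next 3 hours:", sarima.forecast(3).round(1))
```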
The prediction accuracy of the number of network requests to the cloud application has been compared for these methods applied separately and in combination. The results of the conducted research show that the forecasting accuracy of the combined method is higher than that of the other methods by an average of 7%.
Key words: forecasting of time series, cloud computing, ARIMA, exponential smoothing
15. Strubytskyi R. P. A self-similar model of cloud data warehouse load
SELF-SIMILAR MODEL OF CLOUD DATA WAREHOUSE LOAD
Rostyslav Strubytskyi
Information Systems and Networks Department, Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE, E-mail: rolleks@gmail.com
In view of current trends in telecommunications and cloud data warehousing, the topical task is to build a converged multi-service network. Such a network ought to provide an unlimited range of services and offer flexibility in the management and creation of new services. The development of network equipment and transport protocols ought to be based on appropriate mathematical models and on traffic simulation tools with adequate parameters of network processes. One of the most relevant problems in studying cloud data warehouses and their temporal probability characteristics is taking into account the features of network traffic. The aim of the study is to examine different models of network traffic and to analyze the most promising models for cloud data warehouses, which take into account the properties of self-similar traffic as a time series.
The dynamic characteristics of incoming and outgoing traffic, as well as the distribution of hardware capacity of cloud server have been worked out on the basis of a real cloud data warehouse.
Although long-term dependence causes a sharply pronounced fluctuation process, it offers some predictability within narrow time limits. From the point of view of queueing theory, an important consequence of flow correlation is the unacceptability of queue parameter estimates that are based on the assumption of identically and independently distributed intervals in the input stream.
In order to confirm the existence of self-similarity properties of the different data streams of a multiservice network and, thus, cloud storage server workload, the measuring of some characteristics of different network traffic types has been conducted.
The cloud data storage has been used for the purposes of the study. The physical server is divided into multiple virtual areas using the Solaris operating system, each of which is used to perform a number of tasks. Most of the traffic is transmitted by the HTTP/HTTPS, FTP/FTPS and SFTP protocols. For further processing, the following parameters of the data warehouse have been used: the incoming / outgoing traffic, the number of running processes, the processor load and idle time, the average load on the processor, and the amount of cache.
The received data were consolidated over a week, allowing the assumption that they represent the real picture of cloud data storage usage. Observing the time dependence of the traffic, the presence of a periodic component, which leads to a large value of the Hurst parameter, has been noticed. The proximity of the Hurst parameter to 1 allows more accurate predictions to be performed. As a result of this study, a daily periodicity of the cloud data warehouse traffic has been found. The intensity of the storage load depends mainly on the incoming and outgoing traffic. The sufficiently high value of the Hurst parameter indicates the potential possibility of modelling and forecasting the cloud data storage workload in the long term.
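A minimal rescaled-range (R/S) sketch for estimating the Hurst parameter of a load series (our illustration; the toy random-walk series stands in for real traffic samples and, being strongly persistent, yields an estimate close to 1):

```python
# Minimal R/S sketch: estimate the Hurst parameter of a load series.
# The random-walk series is invented; real traffic would replace it.
import numpy as np

def hurst_rs(series, window_sizes=(8, 16, 32, 64, 128)):
    rs_means = []
    for w in window_sizes:
        rs = []
        for start in range(0, len(series) - w + 1, w):
            chunk = series[start:start + w]
            dev = np.cumsum(chunk - chunk.mean())   # cumulative deviations
            r = dev.max() - dev.min()               # range R
            s = chunk.std()                         # standard deviation S
            if s > 0:
                rs.append(r / s)
        rs_means.append(np.mean(rs))
    # slope of log(R/S) against log(window size) estimates H
    return np.polyfit(np.log(window_sizes), np.log(rs_means), 1)[0]

rng = np.random.default_rng(3)
load = np.cumsum(rng.normal(size=4096))             # toy persistent series
print(f"estimated Hurst parameter H = {hurst_rs(load):.2f}")
```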
Keywords: Cloud data storage, self-similarity, modelling, Hurst parameter, network traffic
16. Turkovska O. V. Representation of forest management in computer models of ecological and economic systems
SIMULATION OF FOREST MANAGEMENT IN ENVIRONMENTAL AND ECONOMIC COMPUTER MODELS
Olga Turkovska
International Information Department, Lviv Polytechnic National University, 12 S. Bandera Str., Lviv, 79013, Ukraine
International Institute for Applied Systems Analysis, Schlossplatz 1, Laxenburg, 2361, Austria,
E-mail: turkovska@gmail.com
The aim of this study is to give an overview of existing environmental computer models and to analyze the representation of forest management in them, in order to find the most proper way to deal with forest management regimes. Nowadays, many models have been developed in this field; a few of them are analyzed here. They have been used as an evaluation tool for a large number of international projects and reports concerning climate change policies, in particular European Commission projects and projects of the Centre for International Forestry Research, as well as the Eliasch Review and work for the Netherlands Environmental Assessment Agency and the World Wildlife Fund.
The selected models simulate forest management with different levels of detail. In some models forest management is just a small part of a big simulation system, as in GLOBIOM, while other models, such as EFISCEN or GTM, consider forest management as the main modeling object. The common idea of these models is tracking the changes of the modeled object under the realization of different socio-economic scenarios. By applying a number of socio-economic scenarios, it is possible to evaluate the most proper policies for achieving the target. Finally, four environmental computer models were analyzed:
EFISCEN (European Forest Information Scenario Model) – the matrix model, which simulates development of forest resources at scales from provincial to European level;
GTM (Global Timber Model) – the global model of dynamic optimization which maximizes net present value of net surplus of global timber;
G4M (Global Forest Model) – the global geographically explicit model which predicts afforestation and deforestation rates, forest management regimes and carbon dioxide emissions;
GLOBIOM (Global Biosphere Management Model) – the global recursive partial equilibrium model which simulates the competition for land among different land-use types driven by price and productivity changes.
There is no model which perfectly and fully describes the complexity of the forest system and its linkages and dependencies with other natural and economic systems. Every model has its advantages and disadvantages in simulating forest management. Therefore, the application of several models for policy analysis is the best way to increase the reliability of the assessment.
Keywords – computer model, algorithm of forest management, land-use change, model structure
17. Tsegelyk G. G., Krasniuk R. P. The problem of optimal distribution of tasks between computers on the network
THE PROBLEM OF OPTIMAL DISTRIBUTION OF TASKS BETWEEN COMPUTERS ON THE NETWORK
Grigoriy Tsegelyk, Roman Krasniuk
The Department of Mathematical Modelling of Socio-Economic Processes, Ivan Franko National University of Lviv, 1 Universytetska St., Lviv, 79000, UKRAINE, E-mail: krasniuk@ukr.net
The problem of the growing demand for computing resources and the desire to reduce equipment costs can be solved through the introduction of GRID technology as the core distributed computing technology in the process of building computer systems. The use of GRID technology allows a management system to be built using distributed computing resources. In this situation, it does not matter to users where in the network a particular task runs; a user simply consumes a certain amount of virtual processor capacity available in the network.
From a computational point of view, a GRID system can handle two classes of tasks: problems that admit a parallel computing process, and flows of problems for which parallelization is impossible. Additionally, a GRID system may serve both types of problems simultaneously. Therefore, in managing a GRID system we face the problem of organizing the computations so as to provide their optimum mode. As a result, the construction and study of optimization models related to the functioning of a computer network is a relevant and important issue, which is considered in this article. The article considers the problem of distributing equally complicated tasks among networked computers, given information about the task management solution on each computer. As the optimality criterion, we select the total time for solving the problems, which should be minimal.
In the article we give a mathematical formulation of the problem and offer an efficient numerical algorithm based on the use of dynamic programming techniques. To demonstrate the algorithm, we consider a test case and present the stages of solving the problem with the computational algorithm.
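A minimal sketch of the dynamic programming idea (our illustration with an invented cost table, not the article's test case): distribute N equally complicated tasks over the computers one computer at a time, where time_table[i][j] is the time computer i needs to solve j tasks, minimizing the overall completion time:

```python
# Minimal DP sketch (invented cost table): distribute N equally
# complicated tasks among networked computers so that the overall
# completion time (the slowest computer) is minimal.
from functools import lru_cache

# time_table[i][j] = time computer i needs to solve j tasks (j = 0..N)
time_table = [
    [0, 3, 7, 12, 18],   # computer 0
    [0, 4, 6, 10, 15],   # computer 1
    [0, 5, 8, 11, 13],   # computer 2
]
N = 4                    # number of tasks to distribute

@lru_cache(maxsize=None)
def best(i, j):
    """Minimal completion time for j tasks on computers 0..i."""
    if i == 0:
        return time_table[0][j]
    # give x tasks to computer i, the rest to computers 0..i-1
    return min(max(time_table[i][x], best(i - 1, j - x)) for x in range(j + 1))

print("minimal completion time:", best(len(time_table) - 1, N))
```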
The analysis of the results in the paper suggests on the effectiveness of the proposed approach to solving practical problems of GRID-control systems. The formulated algorithm gives a precise solution of the problem and is easy to implement in object-oriented programming language. Therefore, further study of this issue is to integrate the proposed computational algorithm in the software that manages GRID-system resources.
Keywords – optimal distribution, computer network, dynamic programming methods, computational algorithms.
18. Tsmots I. G., Antoniv V. Ya. Hardware for real-time data sorting by the merge method
HARDWARE FOR DATA SORTING BY METHOD OF MERGING IN REAL TIME
Ivan Tsmots1, Volodymyr Antoniv2
Automated Control Systems Department, Lviv Polytechnic National University, 12 S. Bandera Str., Lviv, 79013, UKRAINE, E-mail: 1ivan.tsmots@gmail.com, 2volodya.antoniv@gmail.com
Requirements to data sorting algorithms are formulated: they must be well structured, recursive and locally dependent, oriented towards implementation on a set of interrelated processing elements (PEs), and provide deterministic data movement. The development of high-efficiency parallel structures for sorting intensive data flows in real time by the merge method is proposed, based on an integrated approach which encompasses methods, algorithms, structures and VLSI technology and considers the particular features of the application. To ensure high efficiency of equipment usage while developing VLSI structures for sorting data sets in real time, it is suggested to use the following principles: parallelization of the data sorting process; hardware specialization and adaptation to the structure of the sorting algorithms and the data arrival intensity; and matching of the sorting intensity with the data arrival intensity. Based on the space-time mapping of the algorithms, consistent flow graphs for real-time data sorting were developed. New algorithms and device structures for parallel and parallel-flow intensive data sorting by the merge method in real time were also developed; by using hybrid algorithms and changing the number of channels and the bit width of incoming data, the data flow intensity is matched with the data sorting capacity, providing more efficient use of equipment. It is shown that in the matrix device the matching of the data flow intensity with the sorting intensity is achieved by changing the bit width of the data channels and bit-processing elements and by using hybrid algorithms that combine the merge and counting sorting methods. In streaming devices, the matching of the data flow intensity with the sorting intensity is achieved by changing the number of channels of two-way merging.
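As a software analogue only (the paper's devices are hardware; this sketch is ours): real-time merge sorting can be pictured as a cascade of two-way merges of already sorted streams, which Python's heapq.merge evaluates lazily, so elements flow through the cascade as they arrive:

```python
# Software analogue (ours, not the paper's hardware): a cascade of
# two-way merges over sorted input channels, evaluated lazily so that
# elements stream through as they arrive.
import heapq

def two_way_merge_cascade(channels):
    """Repeatedly pair up sorted channels with two-way merges."""
    channels = [iter(c) for c in channels]
    while len(channels) > 1:
        paired = [heapq.merge(channels[i], channels[i + 1])
                  for i in range(0, len(channels) - 1, 2)]
        if len(channels) % 2:              # an odd channel passes through
            paired.append(channels[-1])
        channels = paired
    return channels[0]

inputs = [[1, 5, 9], [2, 6], [3, 7, 8], [0, 4]]   # four sorted channels
print(list(two_way_merge_cascade(inputs)))        # [0, 1, ..., 9]
```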
Keywords: data sorting, merge method, hardware, flow graph, real time.
19. Shakhovska N. B., Bolubash Yu. Ya. The “entity-characteristic” Big data model
BIG DATA MODEL “ENTITY-CHARACTERISTIC”
Natalya Shakhovska, Yurij Bolubash
Information Systems and Networks Department, Lviv Polytechnic National University, 12 S. Bandera Str., Lviv, 79013, UKRAINE, E-mail: natalya233@gmail.com
Big data is also a term used to identify data sets that we cannot cope with using existing methodologies and software tools because of their large size and complexity. Many researchers try to develop methods and software tools for transferring Big data or extracting information granules from it.
The peculiarities of Big data are:
• work with unstructured and structured information;
• the need for fast data processing, which makes traditional query languages ineffective for working with Big data.
The purpose of the paper is a formal description of different data models, the selection of operations and carriers, and methods of sharing. The means of parallel data processing (NoSQL, MapReduce algorithms, Hadoop) belong to this class. The defining characteristics of Big data are volume (in terms of data size), velocity (in terms of both the growth rate and the need for high-speed processing of data and results), and variety (in terms of the possibility of simultaneous processing of different types of structured and semi-structured data). The main points of NoSQL are the following: a non-relational data model, distribution, open source code, and good horizontal scalability.
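For orientation, the MapReduce pattern mentioned above splits processing into a map step that emits key-value pairs and a reduce step that folds the values collected per key. A toy single-process imitation of ours (real Hadoop distributes these steps across nodes):

```python
# Toy single-process imitation of MapReduce: map emits (key, value)
# pairs, the shuffle groups them by key, reduce folds each group.
from collections import defaultdict

documents = ["big data velocity", "big data variety", "data volume"]

def map_phase(doc):
    return [(word, 1) for word in doc.split()]

def reduce_phase(key, values):
    return key, sum(values)

grouped = defaultdict(list)                  # the "shuffle" step
for doc in documents:
    for key, value in map_phase(doc):
        grouped[key].append(value)

counts = dict(reduce_phase(k, v) for k, v in grouped.items())
print(counts)   # {'big': 2, 'data': 3, 'velocity': 1, 'variety': 1, 'volume': 1}
```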
The objects and associations of the model are defined. The main data characteristics are represented in NoSQL. A large informational structure of the data is constructed. All this became the basis for continued research and helped to focus on the problem of processing heterogeneous data without their prior integration.
Keywords: Big data, NoSQL, document-oriented database, BigTable
20. Shvorob I. B. A comparative analysis of text parsing methods
THE COMPARATIVE ANALYSIS OF PARSING METHODS
Iryna Shvorob
Information Systems and Networks Department, Lviv Polytechnic National University, 12 S. Bandera Str., Lviv, 79013, UKRAINE, E-mail: irka.shvorob@gmail.com
Nowadays humankind increasingly uses information in digital form, and there is often a need for a specific analysis of this information and for its structuring. To this end, parsing is used – the process, in linguistics and computer science, of comparing a linear sequence of tokens (words) of a natural or formal language with its formal grammar. The result is usually a parse tree (syntax tree). In other words, parsing is a process of analyzing text and breaking it into components using special software. A parser is a program that analyzes text documents, stores the analysis results in its database, and then produces them when relevant and current data are searched for. A parser can detect a large amount of useful information and process it depending on the task. Parsing makes it possible to handle large amounts of data quickly, since accomplishing this manually is almost impossible. In general, parsing is an effective solution for automating the collection and modification of information. It is found that a parser must provide a quick traversal of large amounts of information; competently and carefully separate technical information from non-technical; accurately choose the desired information and discard the unnecessary; and effectively serve and store the data in the desired format.
The aim of this research is to study several parsing algorithms and to analyze their work.
An analytical review of parsing algorithms was carried out. This article contains information about the classification of parsing methods. Four classification criteria are defined: classification by the method of parsing, classification by the way the sequence is analyzed, classification by lookahead, and classification by the use of repetitions. All these classifications are explained and described in the article. It is established that any parser consists of three parts, which are responsible for three separate processes of parsing: getting the text in its original form, extracting and converting the data, and generating the result. The following algorithms were chosen for analysis: Earley parser, LL parser, recursive descent parser, CKY parser, LALR parser and Pratt parser. A software implementation was made for these algorithms.
The sentence “Parsing is the process of analysing a string of symbols” was chosen as a test case for validating the algorithms. A parse tree was constructed for the selected sentence.
The algorithms were compared according to two criteria: performance and the applicability of the grammar to the presented input text. Each algorithm was implemented following the principle illustrated in a context diagram. The comparative efficiency of the above algorithms is presented in a table. In this research a complete grammatical analysis is not built, but texts with a known structure are parsed faster. It is much easier to implement parsing for semi-structured texts which are divided into blocks. They can be organized by using metafeatures created from already extracted information. Such features are extracted from the input document and are used for identifying the information. This approach can be used for all sorts of information.
It is worth noting that some algorithms run faster than others. At the same time, not all algorithms can work with all grammars. For example, the CKY parser is faster than the Earley parser, but the CKY algorithm requires the grammar to be in Chomsky normal form (CNF), whereas the Earley algorithm works for any grammar. The solution to this problem may be to improve the existing algorithms or to create new ones by combining them.
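For illustration, a minimal recursive descent parser (one of the algorithms compared above) for a toy arithmetic grammar in Python; this is a hedged sketch, not the author's implementation. Each nonterminal of the grammar becomes one function, which is what makes the method easy to write by hand.

import re

def tokenize(text):
    return re.findall(r"\d+|[+*()]", text)

class Parser:
    def __init__(self, tokens):
        self.tokens, self.pos = tokens, 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def eat(self, tok):
        assert self.peek() == tok, f"expected {tok!r}"
        self.pos += 1

    def expr(self):    # expr -> term ('+' term)*
        node = self.term()
        while self.peek() == "+":
            self.eat("+")
            node = ("+", node, self.term())
        return node

    def term(self):    # term -> factor ('*' factor)*
        node = self.factor()
        while self.peek() == "*":
            self.eat("*")
            node = ("*", node, self.factor())
        return node

    def factor(self):  # factor -> NUMBER | '(' expr ')'
        if self.peek() == "(":
            self.eat("(")
            node = self.expr()
            self.eat(")")
            return node
        tok = self.peek()
        self.eat(tok)
        return int(tok)

# The nested tuples form the parse tree of the expression.
print(Parser(tokenize("2+3*(4+1)")).expr())  # ('+', 2, ('*', 3, ('+', 4, 1)))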
Keywords: parsing, context-free grammar, text analysis.
COMPUTER AND MATHEMATICAL LINGUISTICS
21. Берко А. Ю., Висоцька В. А., Чирун Л. В. Лінгвістичний аналіз текстового комерційного контенту
LINGUISTIC ANALYSIS OF TEXTUAL COMMERCIAL CONTENT
Andriy Berko1, Victoria Vysotska2, Lyubomyr Chyrun3
1General Ecology and Ecoinformation Systems Department, 2Information Systems and Networks Department, 3Software Department, Lviv Polytechnic National University, 12 S. Bandera Str., Lviv, 79013, UKRAINE, E-mail: 1berkoandriy@i.ua, 2Victoria.A.Vysotska@lpnu.ua, 3chyrunlv@mail.ru
Linguistic research in morphology, morphonology and structural linguistics has identified different patterns for the description of word forms. Since the beginning of the development of generative grammar theory, linguists have focused not only on the description of finished word forms, but also on the processes of their synthesis. Research by Ukrainian linguists is fruitful in such functional areas as theoretical problems of morphological description, the classification of morphemes and the word-formation structure of derivatives in the Ukrainian language, regularities of affix combinations, word-formative modeling of the modern Ukrainian language in integral dictionaries, the principles of internal word organization, the structural organization of denominal verbs and suffixal nouns, problems of word-formative motivation in the formation of derivatives, the regularities of morphological phenomena in Ukrainian word formation, morphological modifications in inflection, morphological processes in word formation and adjective inflection of the modern Ukrainian literary language, textual content analysis and processing, etc.
This dynamic approach of modern linguistics to the analysis of the morphological level of language, with researchers' attention focused on developing morphological rules, allows the results of theoretical research to be used effectively in practice for constructing computer linguistic systems and for processing textual content for various purposes. One of the first attempts to apply generative grammar theory to linguistic modeling belongs to A. Gladky and I. Melchuk. The scientific achievements of N. Chomsky, A. Gladky, M. Hross, A. Lanten, A. Anisimov, Y. Apresyan, N. Bilhayeva, I. Volkova, T. Rudenko, E. Bolshakova, E. Klyshynsky, D. Lande, A. Noskov, A. Peskova, E. Yahunova, A. Herasymov, B. Martynenko, A. Pentus, M. Pentus, E. Popov and V. Fomichev are applied to the development of such means of textual content processing as information retrieval systems, machine translation, textual content annotation, morphological, syntactic and semantic analysis of textual content, educational and didactic systems of textual content processing, linguistic support of specialized linguistic software systems, etc.
Linguistic analysis of content consists of three stages: morphological, syntactic and semantic. The purpose of morphological analysis is to obtain stems (word forms without inflections) together with the values of grammatical categories (for example, part of speech, gender, number, case) for each word form. There are exact and approximate methods of morphological analysis. The exact methods use dictionaries of word stems or word forms. The approximate methods use experimentally established links between fixed letter combinations of word forms and their grammatical meanings. The use of a word-form dictionary in the exact methods simplifies morphological analysis. For example, for the Ukrainian language researchers solve the problem of vowel and consonant alternation caused by changing conditions of word usage. The search for the word stem and its grammatical attributes is then reduced to a dictionary lookup and the selection of appropriate values; morphological analysis proper is applied only if the desired word form cannot be found in the dictionary. With a sufficiently complete thematic dictionary the speed of textual content processing is high, but the required memory is several times larger than with a stem dictionary. Morphological analysis with a stem dictionary is based on inflectional analysis and precise extraction of word stems. The main problem here is the homonymy of word stems; to resolve it, the compatibility of the extracted stem with its inflection is checked. The approximate methods of morphological analysis determine the grammatical class of a word from its final letters and letter combinations. Letters are taken away from the end of the word one by one, and the obtained letter combinations are compared with the inflection list of the corresponding grammatical class. When the final part of the word matches, the remaining part is taken as its stem. Morphological analysis may produce ambiguous grammatical information, and this ambiguity disappears after syntactic analysis. The task of syntactic analysis is to parse sentences based on the data from the dictionary. At this stage nouns, verbs, adjectives, etc. are identified, and the links between them are represented in the form of a dependency tree.
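For illustration, a minimal Python sketch of the approximate method described above: letters are stripped from the end of a word one by one and each candidate ending is looked up in inflection lists per grammatical class. The tiny inflection table is hypothetical and given only to make the procedure concrete.

INFLECTIONS = {  # toy inflection lists per grammatical class
    "noun":      {"ами", "ові", "а", "у", "и", "і"},
    "adjective": {"ого", "ому", "ий", "а", "е"},
}

def analyze(word):
    # Return (stem, ending, class) candidates, longest endings first.
    results = []
    for cut in range(len(word)):
        ending = word[cut:]
        for gram_class, endings in INFLECTIONS.items():
            if ending in endings:
                results.append((word[:cut], ending, gram_class))
    return results

print(analyze("книгами"))  # includes ('книг', 'ами', 'noun')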
In the given article the main problems of electronic content-commerce systems and the functional services of commercial content processing are analyzed. The proposed model gives an opportunity to create an instrument for information resources processing in electronic content commerce systems (ECCS) and to implement the subsystem of commercial content formation, management and support. The process of ECCS design and creation as an Internet marketing result is iterative. It contains a number of stages (from the analysis, design and development of a plan to prototype construction and experimental tests). The latter process begins with the formation of specifications and layout, content template creation, content formation and its subsequent publishing according to the site's structure. In the initial stages (before setting functional requirements and initiating development) regular users are involved in the process through poll letters, alternative designs and prototypes of varying degrees of readiness. Thus, valuable information is collected without much effort, along with evoking the users' sense of direct involvement in the design process and winning their trust. The paper analyzes the sequence of methods and models of information resources processing in electronic content-commerce systems. It also identifies the basic laws of the transition from commercial content formation to its implementation. A formal model of ECCS is created which allows implementing these laws in the phases of the commercial content lifecycle. The developed formal model of information resources processing in electronic content-commerce systems allows us to create a generalized typical architecture of ECCS. The generalized typical architecture of ECCS, which helps implement the processes of commercial content formation, management and realization, is proposed in the paper.
Keywords: information resources, commercial content, content analysis, content monitoring, content search, electronic content of commerce system.
22. Бісікало О. В. Статистичний аналіз складних залежностей у тексті
STATISTICAL ANALYSIS OF COMPLEX RELATIONSHIPS IN TEXT
Oleh Bisikalo
Institute for Automatics, Electronics and Computer Control Systems, Vinnytsia National Technical University, 95 Khmelnytske shose St., Vinnytsia, 21021, UKRAINE, E-mail: obisikalo@gmail.com
The practical utility of the known results of statistical text analysis is considerably limited by the ambiguous-word problem, a key issue of Computational Linguistics. The problem is not solved at the level of single-word analysis, whether morphological or statistical; that is why, to extract knowledge from text, more complex linguistic means of syntactic and/or semantic (semantic-syntactic) analysis must be used. The development of a hybrid approach that combines linguistic and statistical text analysis tools determines the relevance of the research problem: identifying statistical regularities in the syntagmatic and paradigmatic (together, complex) relationships between the word-forms/lemmas of a text.
The article is devoted to obtaining new numerical information on profound text characteristics and to its application for efficiently solving certain problems of Computational Linguistics. The purpose of the study is to justify, theoretically and experimentally (using modern tools), an approach to evaluating the informativeness of statistical features and parameters of complex relationships between the word-forms/lemmas of a text.
To achieve this goal the following problems were posed and solved: the main points of the approach were formulated and its advantages were stated in the form of hypotheses; a formal concept of the subject area was suggested; statistical and information estimates of the relationships between lemmas were obtained that can be determined technologically using modern language packages, including DKPro Core.
The object of the research in the article is textual information analysis, and the subject of the research is the methods and models of knowledge extraction from text.
An associative-statistical approach to extracting knowledge from text based on linguistic ties between text lemmas was further developed, including certain basic concepts of the approach (for example, word-form, lemma, complex relationship, linguistic system and subject area). The last concept, the subject area, formally defined as a predicate, is the most significant limitation of the proposed approach; within it, hypothesis 1 was formulated and experimentally verified: the Pareto distribution is valid not only for the words/word-forms/lemmas of a particular subject area, but also for the identified set of relationships between them. The statistical evaluation of collections of text documents in the subject area was substantiated as an additional restriction of the approach: the expected value of the number of repeated relationships, and confidence intervals for the unknown expected value of the statistical population. This allowed an information analysis of the approach to the actual problem of determining keywords in text, including an upper estimate of the increased frequency of keywords in a document.
The practical value of the results of the study lies in using and improving the technological capabilities of the popular linguistic package DKPro Core, based on the open Apache UIMA architecture, in particular with a view to the experimental verification of hypothesis 1. It is shown that DKPro Core tools allow implementing the approach in practice and identifying the essential content features and the author's style of an English text.
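For illustration, a hedged Python sketch of how hypothesis 1 can be checked on a toy scale: adjacent-lemma pairs stand in for the syntactic ties extracted with DKPro Core, their frequencies are counted, and the rank-frequency pairs are printed in logarithmic scale, where a Pareto/Zipf-like law appears as a near-linear trend.

import math
from collections import Counter

lemmas = "the cat sat on the mat the cat ran".split()  # toy lemmatized text
pairs = Counter(zip(lemmas, lemmas[1:]))               # adjacent-lemma ties

for rank, (pair, freq) in enumerate(pairs.most_common(), start=1):
    print(rank, pair, freq, round(math.log(rank), 2), round(math.log(freq), 2))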
Keywords: word-form, lemma, complex relationship, Pareto distribution, tree of ties.
23. Верес О. М. Онтологія очищення даних
ONTOLOGY DATA CLEANSING
Oleh Veres
Information Systems and Networks Department, Lviv Polytechnic National University, 12 S. Bandera Str., Lviv, 79013, UKRAINE, E-mail: oleh.m.veres@lpnu.ua
Poor data quality is one of the biggest challenges in constructing analytical solutions, since incorrect conclusions are drawn from incorrect information. Data cleansing is an important step in the analytical process, and the correctness of the analysis and the accuracy of the built analytical models largely depend on how effectively it is performed. Decision Support Systems (DSS) are the foundation of the IT infrastructure at various companies, because these systems make it possible to transform business information into clear and useful conclusions.
Data cleansing is performed before data is loaded into the data warehouse, and in the analytical application immediately before analysis. Today there is a huge number of available methods for cleansing data of errors and inaccuracies. It is difficult to identify the most effective one, because each method takes a completely different approach to the problem. This article discusses the development of a data cleansing ontology to simplify the building of DSS models and their functional components. The stages of data cleansing are defined, together with descriptions of the basic methods, algorithms and approaches implementing the function of each stage. The proposed ontology is built according to the METHONTOLOGY approach, which reflects iterative design. This approach means that the glossary contains all terms (concepts and their instances, attributes, actions) that are important for data cleansing, together with their natural-language descriptions. Data cleansing in DSS includes the following stages: data analysis; sequencing rules and data conversion; confirmation; conversion; backflow of cleansed data; data preprocessing. Data analysis is the identification of the error types and inconsistencies that are subject to removal. Confirmation is verifying the accuracy and efficiency of the process and defining the conversion. Conversion is performed either in the ETL workflow that loads and updates the data warehouse, or in response to requests from a plurality of sources. Backflow of cleansed data is the replacement of the contaminated data in the primary sources with the data obtained during cleansing. Data preprocessing is a set of methods and algorithms used in analytical applications to prepare data for solving a particular problem and to bring it in line with the requirements determined by the specific objective and the ways of resolving it. At each stage, appropriate data cleansing methods are used that belong to mathematical statistics, Data Mining or other special methods. The development of data cleansing technologies in OLTP systems and the ETL process is a product of the joint work of the programmer and the analyst. In the immediate preparation for analysis, data cleansing is the task of the analytical application user, and it should not require intervention by technical staff. The objectives and methods of data preprocessing are completely user-defined and limited only by the set of tools granted by the system. The glossary of the data cleansing ontology contains terms that can be semantically divided into three groups: the task structure (cleansing stages, connections), the data that describe the problem (the methods used at each stage), and the results of calculations (cleansed data). Such an approach greatly contributes to the quality of decision making with DSS.
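For illustration, a minimal Python sketch (not the paper's ontology itself) of the staged view of data cleansing: each stage from the list above becomes one step of a pipeline, so that the methods attached to each stage can be described and swapped independently, as the ontology requires.

def analyze(data):       # identify error types and inconsistencies
    return data

def transform(data):     # sequencing rules and data conversion
    return [row.strip().lower() for row in data]

def confirm(data):       # verify the accuracy of the process
    assert all(row for row in data), "empty rows remain"
    return data

def preprocess(data):    # prepare data for a particular analytical task
    return sorted(set(data))

PIPELINE = [analyze, transform, confirm, preprocess]

def cleanse(data):
    for stage in PIPELINE:
        data = stage(data)
    return data

print(cleanse(["  Alice ", "BOB", "alice"]))  # ['alice', 'bob']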
Keywords: data, method, ontology, data warehouse, decision making, Decision Support System.
24. Висоцька В. А. Особливості моделювання синтаксису речення слов’янських та германських мов за допомогою породжувальних контекстно-вільних граматик
FEATURES MODELING SYNTAX OF SENTENCE FOR SLAVIC AND GERMANIC LANGUAGES USING GENERATIVE CONTEXT-FREE GRAMMARS
Victoria Vysotska
Information Systems and Networks Department, Lviv Polytechnic National University, 12 S. Bandera Str., Lviv, 79013, UKRAINE, E-mail: Victoria.A.Vysotska@lpnu.ua
This paper presents the application of generative grammars in linguistic modelling. The description of sentence syntax modelling is applied to automate the processes of analysis and synthesis of natural language texts.
The article shows the features of sentence synthesis in different languages using generative grammars. The paper considers the influence of the norms and rules of a language on the course of grammar construction. The use of generative grammars has great potential for the development and creation of automated systems for textual content processing, for the linguistic support of computer linguistic systems, etc. In natural languages there are cases when a phenomenon that depends on the context is described as context-independent, namely in terms of context-free grammars. If the number of symbols on the right side of each rule is not smaller than on the left, an unshortened (non-contracting) grammar is obtained. If, in addition, each rule replaces only one symbol, a context-sensitive grammar is obtained. If the left side of each rule consists of a single symbol, a context-free grammar is obtained. In the general case, none of these natural constraints is imposed on the rules. The description is complicated by the formation of new categories and rules. The article describes the peculiarities of introducing new restrictions on these grammar classes through the introduction of new rules. Given the importance of automated text content processing in modern information technology (for example, information retrieval systems, machine translation, semantic, statistical, optical and acoustic analysis and synthesis of speech, automated editing, knowledge extraction from text content, text abstracting and annotation, text indexing, educational and didactic applications, linguistic corpora management, means of dictionary compiling, etc.), specialists actively seek new models of description and methods for automatic text content processing. One of these methods is the development of general principles of lexicographic systems of the syntactic type and of text content processing for specific languages.
Any syntactic analysis tool consists of two parts: a knowledge base about a particular natural language and a syntactic analysis algorithm (a set of standard operators for text content processing based on this knowledge). The source of grammatical knowledge is data from morphological analysis and various filled tables of concepts and linguistic units. They are the result of empirical processing of text content in natural language by experts, aimed at highlighting the basic laws for syntactic analysis. The tables of linguistic units are sets of configurations or valencies (syntactic and semantic-syntactic dependencies): lists/dictionaries of lexical units indicating, for each of them, all possible links with other units of expression in the natural language. When implementing syntactic analysis, the rules for transforming table data should be fully independent of their contents, so that a change of content does not require restructuring the algorithm.
The vocabulary V consists of a finite non-empty set of lexical units. An expression over V is a finite-length string of lexical units from V. The empty string contains no lexical units and is denoted by ε. The set of all expressions over V is denoted by V*. A language over V is a subset L ⊆ V*. A language can be specified by listing all its expressions, or through defining criteria which the expressions belonging to the language should satisfy. There is another important method of specifying a language: the use of a generative grammar. A grammar consists of a set of lexical units of various types and a set of rules, or productions, for constructing expressions. A grammar has a vocabulary V, which is the set of lexical units for building language expressions. Some of the vocabulary's lexical units (terminals) cannot be replaced by other lexical units.
The text realizes structurally presented activity involving a subject, object, process, purpose, means and result, which appear in content-related, structural, functional and communicative criteria. The units of the internal organization of the text structure are the alphabet, vocabulary (paradigmatics), grammar (syntagmatics), paradigm, paradigmatic relations, syntagmatic relations, identification rules, expressions, and the unity between phrases, fragments and blocks. Sentences, paragraphs, sections, chapters, subchapters, pages etc. are singled out at the compositional level; except for the sentence, they are only indirectly related to the internal structure and are therefore not considered. Content analysis for compliance with thematic requests is modelled as a composition of operators, in which a keyword identification operator and a content categorization operator (applied according to the identified keywords) map the content, under a set of keyword identification conditions and a set of categorization conditions, into the set of rubric-relevant content. The digest set is formed by a similar dependence, in which a digest formation operator is applied, under a set of digest formation conditions, to the rubric-relevant content. Using a database of terms/morphemes and structural parts of speech and defined rules of text analysis, the search for terms is performed. Parsers operate in two stages: identifying the lexemes of the content and creating a parse tree.
The application of generative grammar theory to problems of applied and computational linguistics at the levels of morphology and syntax makes it possible to create systems for speech and text synthesis, to create practical morphology textbooks and inflection tables, to compile lists of morphemes (affixes, roots), and to determine the productivity and frequency of morphemes and the frequency of realization in text of different grammatical categories (gender, case, number, etc.) for specific languages. Models developed on the basis of generative grammars are used in computer linguistic systems designed for the analytical and synthetic processing of textual content in information retrieval systems, etc. It is useful to introduce further restrictions on these grammars, obtaining ever narrower classes. When describing a complex range of phenomena, the set of descriptive means is kept limited, and features stated only in general terms are obviously insufficient. Research begins with minimal means; whenever they prove insufficient, new means are gradually introduced in small portions. Thus, it is possible to determine exactly which means can or cannot be used in the description of a phenomenon, in order to understand its nature.
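For illustration, a minimal Python sketch of sentence synthesis with a generative context-free grammar: nonterminals are expanded by randomly chosen productions until only terminal lexical units remain. The toy grammar is hypothetical and serves only to show the mechanism.

import random

GRAMMAR = {  # toy context-free grammar: nonterminal -> list of productions
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"]],
    "Det": [["the"], ["a"]],
    "N":   [["student"], ["sentence"]],
    "V":   [["writes"], ["parses"]],
}

def generate(symbol="S"):
    if symbol not in GRAMMAR:  # terminal lexical unit
        return [symbol]
    production = random.choice(GRAMMAR[symbol])
    return [word for part in production for word in generate(part)]

print(" ".join(generate()))  # e.g. "the student parses a sentence"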
Keywords: generative grammar, structural schemes of sentences, linguistic information system.
25. Демчук А. Використання асоціативних правил для вироблення знань з побудови тифлокоментарів
USE OF ASSOCIATION RULES TO GENERATE KNOWLEDGE ON TYPHLOCOMMETS CONSTRUCTION
Andriy Demchuk
Information Systems and Networks Department, Lviv Polytechnic National University, 12 S. Bandera Str., Lviv, 79013, UKRAINE, E-mail: andriydemchuk@gmail.com
In this article the use of association rules to generate knowledge on the construction of typhlocomments (verbal comments imposed over the audio track of video content so that people with visual impairments can understand the story) is described; this made it possible to find relationships between related events. The Apriori algorithm was chosen as the best one for this task.
The algorithms for finding association rules are designed to find rules X -> Y whose support and confidence are higher than certain predetermined limit values, called, respectively, the minimum support (minsupport) and the minimum confidence (minconfidence).
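For illustration, a minimal Python sketch of the support/confidence test just described for a candidate rule X -> Y over a toy transaction set; the events are hypothetical stand-ins for the annotated video events.

def support(itemset, transactions):
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(x, y, transactions):
    return support(x | y, transactions) / support(x, transactions)

transactions = [
    {"door_opens", "footsteps"},
    {"door_opens", "footsteps", "dialogue"},
    {"dialogue"},
    {"door_opens", "footsteps"},
]
x, y = {"door_opens"}, {"footsteps"}
minsupport, minconfidence = 0.5, 0.8
s, c = support(x | y, transactions), confidence(x, y, transactions)
print(s, c, s >= minsupport and c >= minconfidence)  # 0.75 1.0 True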
The problem of finding association rules was first proposed for determining typical customer behavior when buying in supermarkets, which is why it is sometimes called market basket analysis.
We interviewed visually impaired people to gain knowledge in the form of rules for typhlocomment construction. When studying the problem of access to video content for people with visual impairments, we should understand that only the audio format is available to such a person, and to understand what is happening they need additional comments explaining the current events in a particular episode of the video. According to statistics, a person perceives through vision about 82% of the information from the outside world, and through hearing about 16%.
The development of a mathematical model of the typhlocommenting process through the use of association rules made it possible to formalize the construction of video content for visually impaired people; the results are implemented in the software and algorithmic complex “Audio Editor”, which is designed to solve the problem of providing full access to video content for people with visual impairments.
Keywords: typhlocomment, audiodescription, association rules, videocontent, IT, videocontent for visually impaired people.
26. Кісь Я. П., Висоцька В. А., Чирун Л. Б., Фольтович В. Застосування контент-аналізу для опрацювання текстових масивів даних
THE USE OF CONTENT ANALYSIS FOR TEXTUAL DATA SETS PROCESSING
Iaroslav Kis1, Victoria Vysotska2, Liliya Chyrun3, Vasyl Foltovych4
Information Systems and Networks Department, Lviv Polytechnic National University, 12 S. Bandera Str., Lviv, 79013, UKRAINE, E-mail: 1Yaroslav.P.Kis@lpnu.ua, 2Victoria.A.Vysotska@lpnu.ua, 3lchirun@mail.ru, 4vfoltovych@gmail.com
In recent decades humanity has made a significant step in developing and implementing new technologies. The development of technologies has given the opportunity to solve many complex tasks which humanity had faced, but it has also generated new tasks whose solution is difficult. One of these is the task of content analysis. Methods and systems of content analysis are used in various areas of human activity (politics, sociology, history, philology, computer science, journalism, medicine, etc.). These systems are quite successful and do not require large funds and much time to get the desired result. At the same time, using systems of this type makes it possible to increase the level of success by 60 %. A basic content analysis system includes the following features: quick information updates, search for information on the resource, collection of data about customers and potential customers, creation and editing of surveys, and analysis of resource visits. If the workload is reduced by automating the system with an information system of content analysis, the time for processing and obtaining the necessary information is also reduced and the productivity of the system increases, which leads to a decrease in the money and time spent to get the desired result. The relevance of the theme is caused by users' increasing demands on these systems and by the following factors: the rapid growth in demand for reliable information, the necessity of forming sets of operational information, and the automatic filtering of unwanted information.
The development of Internet technologies and services has given humanity access to a virtually unlimited quantity of information, but, as often happens in such cases, there is a problem of reliability and efficiency. That is why content analysis technologies are implemented to make information efficient and trustworthy. The use of these technologies allows receiving information as a result of their functioning and provides an opportunity for operational intervention in the system in order to raise the level of the system, the activity of the information resource and its popularity among users. The world's leading producers of information resource processing tools, such as Google, AIIM, the CM Professionals organization, EMC, IBM, Microsoft, Alfresco, Open Text, Oracle and SAP, actively work in this direction.
Content analysis is a qualitative and quantitative method of studying information, which is characterized by the objectivity of its conclusions and the rigour of its procedure, and consists in the quantitative treatment of results with their subsequent interpretation. Content analysis originated in journalism and mass communication research, and its techniques are used in the following empirical areas: psychiatry, psychology, history, anthropology, education, philology, literary analysis and linguistics. Overall, the methods of content analysis in these areas are connected with its use within the framework of sociological research. Content analysis is developing rapidly today; this is associated with the development of information and Internet technologies, where this method has found wide application.
When creating an effective information system, significant attention should be given to content management, because content analysis is used in content management systems to automate work and to reduce money and time expenses. There are several stages in content management: content analysis, content processing and content presentation. For effective system work, the content is first analyzed, then the relevant results are processed and conclusions are drawn, after which the content is worked on; at the final step the content is presented. The methods of content analysis are the following: comments analysis, rating evaluation, and analysis of statistics and history. Comments analysis is used for analyzing, adjusting and monitoring the moods of system users who write reviews about the system's advantages and disadvantages, or for extracting operational and relevant information from their comments. Analysis of statistics and history is used for observing and processing results that determine the efficiency and relevance of information. For example, if one article was visited by 100 users and another was visited by only one person, then we can certainly claim that the information in the first article is more effective than in the second. Rating assessment is used to compare similar articles and is conducted with the help of polls, user evaluations, etc. Content in the form of articles is the basis of an online newspaper, through which the user looks for the necessary information. Thanks to content analysis, the system owner can determine the reliability and efficiency of the information contained in an online newspaper article. With the help of this option one can determine the popularity of the newspaper and take actions to increase the number of users. General recommendations for the architectural design of content analysis systems are developed, which differ from existing ones in more detailed stages and in the availability of an information resource processing module, allowing information resources to be handled easily and efficiently at the system development stage.
Keywords: content, content analysis, information resource, content management system.
27. Кульчицький І. М. Вибір розміру вибірки для статистичних опрацювань текстів
THE DEFINING OF SAMPLE SIZE FOR STATISTICAL ANALYSIS OF TEXTS
Ihor Kulchytskyy
Applied Linguistics Department, Lviv Polytechnic National University, 12 S. Bandera Str., Lviv, 79013, UKRAINE, E-mail: bis.kim@gmail.com
The exploration of methods and techniques of quantitative study in linguistics is not losing its topicality. One of the important areas of quantitative study is the examination of the informational and statistical properties of a text, which are commonly used in text attribution and deciphering. On the other hand, for any statistical analysis of a text it is important to choose the method and the sample size correctly. This study is an attempt to establish the percentage of the works of art by Marko Cheremshyna sufficient to determine the probable relative frequencies of symbols in them, as well as to examine the stability of these frequencies.
The material of the research is the complete works of Marko Cheremshyna published in 1937. The main object is the relative frequency of the letters of the Ukrainian alphabet. Since hyphens, apostrophes and spaces (the latter divide the text into words) are also used in Ukrainian texts, in the calculations Marko Cheremshyna's works are interpreted as a set of symbols of the extended Ukrainian alphabet, which includes the apostrophe, the hyphen and the space. Since the text samples are created by means of a computer programme, the paragraph length is taken to be the length of a text passage. The complete works were converted into electronic form and normalized, and for each of them the relative frequencies of the symbols of the extended alphabet were defined. From the received samples, 5 research texts of identical length (about 470,000 symbols) were created, differing only in paragraph length (about 100, 200, 300, 400 and 500 symbols respectively). The paragraph length was chosen arbitrarily. All small letters were changed into capitals; text symbols not included in the extended alphabet were replaced by a space; only one space was left between words; the text was divided into paragraphs of fixed length to within the accuracy of a word, so that if adding a word makes the paragraph longer than the predetermined length, the word is not cut and the paragraph is left longer by several symbols; in the calculations the end symbol of a paragraph is treated as a space.
With the use of Pearson's chi-squared test, the optimal length of text passages and the amount of the author's text were determined at which the relative frequencies of symbols coincide with the frequencies counted for all works, and the rank of each symbol of the extended alphabet in the frequency distribution was defined. In total, 965,000 experiments were conducted. Thus, for Marko Cheremshyna's works the optimal passage length is 100 characters and the sample length is about 42 % of the total.
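For illustration, a minimal Python sketch of the kind of comparison performed above: observed symbol counts in a sample are tested against reference frequencies with Pearson's chi-squared statistic. The reference frequencies here are toy values, not the Cheremshyna data.

from collections import Counter

def chi_squared(sample, reference_freq):
    observed = Counter(sample)
    n = len(sample)
    return sum(
        (observed.get(ch, 0) - n * p) ** 2 / (n * p)
        for ch, p in reference_freq.items()
    )

reference = {"а": 0.40, "б": 0.35, "в": 0.25}  # toy relative frequencies
print(round(chi_squared("ааббваб", reference), 3))  # 0.459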
Keywords: quantitative study, sample, sample length, frequency, Marko Cheremshyna.
28. Кульчицький І. М., Шандрук У. С. Вплив орфографії на частоту букв у текстах
SPELLING INFLUENCE ON THE LETTERS FREQUENCY IN TEXTS
Ihor Kulchytskyy, Uliana Shandruk
Applied Linguistics Department, Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE, E-mail: bis.kim@gmail.com
One of the most common methods for studying the vocabulary of a particular author, work of art or genre is the statistical one, as quantitative characteristics of a text can reveal not only the composition of the lexis, but also the ratio of usage of its different layers and the ratio of rare and common words. It is considered that each author has a peculiar style of writing, and from its qualitative and quantitative characteristics conclusions can be drawn about authorship. Statistical methods allow the object of study to be the whole neutral vocabulary, which is a measure of the diversity or uniformity of the writer's dictionary.
In the process of text attribution it is crucial to remember that works by the same author could be issued at different times according to different spelling conventions. The purpose of this article is to examine the effect of spelling changes on letter frequencies in the same text. The material of the study is the first volume of the complete works of Les Martovych and three other collections of his works published in 1903, 1904 and 1922. The material was chosen in this way because the spelling of the works written during 1903-1905 differs both from the modern orthography and from that of 1922; the latter, in turn, also differs substantially from modern spelling. Since not only letters but also hyphens, apostrophes and spaces (the latter divide the text into words) are used in Ukrainian texts, and the main subject of this study is the relative frequency of the letters of the Ukrainian alphabet, in the calculations the texts of Les Martovych's works were considered as a set of symbols of the extended Ukrainian alphabet, which includes letters, apostrophes, hyphens and spaces. First, the texts of all works were converted into electronic form and normalized. Then they were divided into 4 groups. The first and second groups included works published during 1903-1905 and in 2011 respectively. The third and fourth groups included the 1922 and 2011 works respectively.
To carry out the analysis, all four groups of works were altered according to the following rules: all small letters were changed into capitals; text symbols not included in the extended alphabet were replaced by a space; only one space was left between words; in the calculations the end symbol of a paragraph was treated as a space. The absolute and relative frequencies of the extended alphabet symbols were defined for each of the groups. The results were compared using Pearson's chi-squared test. The comparisons showed that the letter frequencies in the works of Les Martovych are significantly affected by spelling changes.
Keywords: quantitative study, frequency of letters, spelling, relative frequency, Les Martovych.
29. Кушнірецька О. І., Кушнірецька І. І., Берко А. Ю. Семантичний пошук і зберігання даних науково-технічної інформаційної системи
SEMANTIC SEARCH AND STORAGE OF DATA OF SCIENTIFIC AND TECHNICAL INFORMATION SYSTEM
Irina Kushniretska1, Oksana Kushniretska1, Andriy Berko2
1Information Systems and Networks Department, 2General Ecology and Ecoinformation Systems Department, Lviv Polytechnic National University, UKRAINE, Lviv, 12 S. Bandera Str., E-mail: 1presty@i.ua, 2berkoandriy@i.ua
This paper describes semantic search and data storage in a scientific and technical information system.
The aim of this work is to use existing technologies to solve the problem of semantic search and data storage in a scientific and technical information system by providing semantics for the content of the information resource and by designing a mathematical model of the structural representation of texts in the system. The process of semantic search and storage of scientific and technical information is described in the paper using a UML sequence diagram. It is shown that the main objects of the search sequence and document download processes are the user, the interactive interface, the document tree, and the downloading module. Proposals for the semantic structuring of the content of a scientific and technical information system, with an explicitly structured representation of the semantic relations between the information objects contained in the system, are presented. The main components of the mathematical model of the ontology of a scientific and technical information system for the semantic search and storage of scientific and technical information resources are determined.
The object of the research is the process of semantic search and data storage in a scientific and technical information system. The subject of the research is the use of ontologies for providing semantics for the content of the information resource and for designing a mathematical model of the structural representation of texts in the system. The scientific novelty and practical value lie in the use of ontologies for solving the problem of semantic search and data storage in a scientific and technical information system.
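For illustration, a minimal Python sketch (the names are hypothetical) of the idea of storing explicitly structured semantic relations between information objects as triples and answering a semantic query over them rather than by plain keyword matching.

TRIPLES = [  # (subject, relation, object) semantic relations
    ("article_17", "has_topic", "semantic_search"),
    ("article_17", "cites", "article_3"),
    ("article_3",  "has_topic", "ontologies"),
]

def related(obj, relation):
    # All subjects linked to obj by the given relation.
    return [s for s, r, o in TRIPLES if r == relation and o == obj]

# Semantic query: which documents are about semantic_search?
print(related("semantic_search", "has_topic"))  # ['article_17']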
Keywords: scientific and technical information system, semantic search, scientific and technical information resources, storage of scientific and technical information resources, ontology.
30. Лозинська О. В., Давидов М. В. Побудова системи правил для комп'ютерного перекладу української жестової мови на основі аналізу її синтаксичних конструкцій
DEVELOPMENT OF RULES FOR MACHINE TRANSLATION OF UKRAINIAN SIGN LANGUAGE BASED ON ITS SYNTAX INVESTIGATION
Olga Lozynska1, Maksym Davydov2
Information Systems and Networks Department, Lviv Polytechnic National University, 12 S. Bandera Str., Lviv, 79013, UKRAINE, E-mail: 1kanmirg@gmail.com, 2mdavydov@adva-soft.com
In modern society it is necessary to provide comfortable access to information resources for people who communicate using sign language (SL). To solve this problem it is necessary to develop specialized software tools that help in sign language study and translation. Sign language is an independent visual-spatial language in which hand gestures, facial expressions and lip articulation are used to transfer information. Sign language has its own grammar structure, which is distinct from that of spoken languages. Ukrainian sign language (USL) is a means of communication for deaf people and contains about two thousand signs, most of which are performed with both hands. There is no international sign language, and even Ukrainian sign language has several dialects. One of the problems of Ukrainian sign language computer translation is the lack of a formally adopted writing system for SLs; therefore, for Ukrainian sign language translation a writing system for USL must be created. The main problems of Ukrainian sign language computer translation are: translation ambiguity (the number of words in the sign language differs from the number of words in the spoken language); the grammar of the sign language differs from the grammar of the spoken language (sign language has its own word order in sentences); the use of finger spelling, etc. The lack of research on Ukrainian sign language grammar complicates USL machine translation. The authors deal with the development of computer translation of Ukrainian sign language. One of the tasks is to create a rule-based machine translation system. We built a small corpus of Ukrainian sign language sentences; the USL is annotated with glosses. Modern computer translation systems for sign languages around the world were investigated, and the basic translation methods that can be used for Ukrainian sign language translation were considered. The rules for Ukrainian sign language machine translation were constructed based on an investigation of its syntax.
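For illustration, a minimal Python sketch of a single rule-based translation step of the kind such systems use: a part-of-speech pattern of the spoken sentence is matched and its constituents are reordered into sign-language gloss order. The rule and the example are hypothetical, not taken from the authors' corpus.

RULES = [
    # (source POS pattern, target order as indices into the match)
    (("PRON", "VERB", "NOUN"), (0, 2, 1)),  # move the object before the verb
]

def translate(tagged_words):
    pos = tuple(tag for _, tag in tagged_words)
    for pattern, order in RULES:
        if pos == pattern:
            return [tagged_words[i][0] for i in order]
    return [w for w, _ in tagged_words]     # fallback: keep the word order

sentence = [("Я", "PRON"), ("читаю", "VERB"), ("книгу", "NOUN")]
print(translate(sentence))  # ['Я', 'книгу', 'читаю']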
Keywords: Ukrainian sign language, bilingual corpora, computer translation, grammar.
31. Хомицька І., Теслюк В. Метод статистичного аналізу художнього стилю англійської мови на фонологічному рівні
METHOD OF STATISTICAL ANALYSIS OF THE BELLES-LETTRES STYLE OF THE ENGLISH LANGUAGE ON THE PHONOLOGICAL LEVEL
Iryna Khomytska1, Vasyl Tesliuk2
Department of Applied Stylistics, Department of Computer Aided Design Systems, Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE, E-mail: 1iryna.khomytska@ukr.net, 2tesliuk@mail.ru
It is expedient to use the method of statistical analysis for a more exact determination of the interrelation of quantitative and qualitative characteristics in the belles-lettres style (poetry, fiction, drama).
A precise definition of the differentiating features of the belles-lettres style is under study, because many elements penetrate into it from other styles, alongside elements common to the three substyles. That is why the use of the statistical method is topical.
The novelty of the investigation is a constructed model which represents the substyle differentiation of the belles-lettres style at the phonological level. The model differs from others in establishing the similarity and difference between the compared substyles by, respectively, a smaller and a greater number of consonant phoneme groups for which essential differences have been established with the help of the statistical method. The statistical method used made it possible to differentiate poetry, fiction and drama taking into account the phoneme position in a word. The interrelation of style and substyle factors is represented in the suggested model. The theoretical value of the research lies in a more exact identification of the place of each consonant phoneme group in the phonological subsystem of the system of the belles-lettres style.
The practical value lies in the determination of the frequency characteristics by which it is possible to assign a text to a particular substyle within the belles-lettres style. The investigation can be continued on the basis of the obtained data with the aim of a thorough characterization of the substyles of the belles-lettres style.
Keywords: average frequency of consonant phoneme groups, phoneme position in a word.
32. Шестакевич Т. В., Висоцька В. А., Чирун Л. В., Чирун Л. Б. Моделювання семантики речення природною мовою за допомогою породжувальних граматик
THE MODELING OF THE SEMANTICS OF SENTENCES IN NATURAL LANGUAGE USING GENERATIVE GRAMMAR
Tetiana Shestakevych1, Victoria Vysotska2,
Lyubomyr Chyrun3, Liliya Chyrun4
Information Systems and Networks Department, Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE, E-mail: 1fishmy2006@rambler.ru, 2Victoria.A.Vysotska@lpnu.ua, 3chyrunlv@mail.ru, 4lchirun@mail.ru
The need to implement the processes of analysis and synthesis of natural language texts led to the emergence of linguistic models of their processing. Many linguistic disciplines need to be developed for the needs of the information sciences and for the development of automated systems of multilingual information processing. Linguistic analysis of natural language texts consists of several sequential processes: graphemic, morphological, syntactic and semantic analysis. For modeling the syntactic level of language the authors used the formalism of generative grammars, introduced by N. Chomsky. Formal analysis of the grammatical structure of phrases allows allocating the syntactic structure (the constituents), which is the basic pattern of the phrase regardless of its meaning. The works of N. Chomsky and A. Gladkyy are applicable to the development of such means of natural language processing as information retrieval systems, machine translation, text annotation, morphological, syntactic and semantic analysis of texts, educational-didactic systems, and the linguistic support of specialized software systems.
In this article the authors present ways of using generative grammars for modeling the syntax of sentences in different languages: English, German and Ukrainian. To do so, the syntactic structure of the sentence was parsed and the features of the sentence synthesis process in these languages were demonstrated. The influence of the norms and rules of a language on the course of constructing grammars was considered. Additionally, an example of grammar application was given to illustrate sentence generation with the basic nominal group scheme of the appropriate type, along with a list of adjectives with class indexes.
The growing rate of content production leads to a reduction of the general level of potential user awareness: the content acquires information noise, its irrelevance increases, it is duplicated, and the process of selecting content from a variety of information sources is complicated. To summarize large dynamic streams of content, the method of content monitoring is proposed. The input information for the content monitoring method is a natural language text as a sequence of symbols; the output information is the partition of the text into sentences and tokens. Content monitoring is a software means of automating the search for the most important components in streams of content. A part of content monitoring is content analysis of the text, intended for searching the data array for meaningful linguistic units. The use of content analysis in the monitoring of online data sources automates the process of finding the most important components of the content retrieved from these sources. This eliminates duplicate content, information noise, parasitic content, redundancy in search results, etc.
During content monitoring, the generative grammar formalism considers text as a linearly ordered set of words, phrases and sentences; the interrelatedness of linguistic units and the non-linearity of natural language are ignored when variations of statistical analysis methods are used. This step allows bringing the researched content to a uniform form for filling out the template, which facilitates future work.
Keywords: generative grammar, structural schemes of sentences, computer linguistic system.
PROGRAM AND PROJECT MANAGEMENT
33. Алєксєєва К. А. Методи підвищення ефективності управління комерційними веб-проектами за умов невизначеності
METHODS OF INCREASING EFFICIENCY OF COMMERCIAL WEB PROJECTS MANAGEMENT IN CONDITIONS OF UNCERTAINTY
Kateryna Alekseyeva
Social Communications and Information Activity Department, Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE, E-mail: kateryna.alekseyeva@gmail.com
The essential problems solved during the commercial project lifecycle are the planning and preparation of the project. The project planning and preparation process involves identifying a number of characteristics that define the technological, content-related, commercial and other features of the project. A peculiarity of the control parameters of a commercial web project is the difficulty of determining their exact values. In this case, the use of methods and means of control based on the principles of situational control and fuzzy logic is appropriate. The experience gained to date in this area allows applying the principles of fuzzy logic to project management problems.
A commercial web project is the creation of a specific Internet resource by a developer on the customer's demand, for subsequent income generation or for supporting the customer's main business. One of the essential features of commercial web projects is their focus on the use of the result by a wide range of consumers. Therefore, the commercial success of the project depends on many external and internal factors. The performer, the customer and the target audience of consumers determine the values of the parameters which characterize the factors that influence the project. At the same time, such values cannot always be set or determined with sufficient accuracy and reliability. In this case, there is a need for making project decisions and for planning and implementing project activities taking into account the absence, incompleteness or inaccuracy of some data. In this paper, fuzzy logic is selected as a tool that solves the problem of commercial web project management taking all peculiarities of the project into account. It allows replacing the values of necessary parameters that are difficult or impossible to determine during management processes with their fuzzy linguistic counterparts. The main objective of this work is to determine the procedure and methods of forming and applying fuzzy data in the technological tools of commercial web project management.
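For illustration, a minimal Python sketch of the replacement just described: an exact project parameter (here a hypothetical "expected audience size") is mapped to fuzzy linguistic terms via triangular membership functions.

def triangular(x, a, b, c):
    # Membership of x in a triangular fuzzy set (a, b, c).
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

TERMS = {  # linguistic counterparts of the numeric parameter (toy values)
    "small":  (-1, 0, 5_000),
    "medium": (1_000, 10_000, 50_000),
    "large":  (20_000, 100_000, 200_000),
}

audience = 12_000
for term, (a, b, c) in TERMS.items():
    print(term, round(triangular(audience, a, b, c), 2))
# "medium" dominates, so the fuzzy value of the parameter is "medium"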
Keywords: project, project management, data uncertainty, project decision making, web resources, commercial content, content analysis, Internet marketing, fuzzy data, fuzzy logic.
34. Андруник В. А., Чирун Л. Б., Чирун Л. В. Інтелектуальний аналіз матеріально-технічного забезпечення структурної одиниці навчального закладу
INTELLIGENT ANALYSIS OF THE LOGISTICS OF A STRUCTURAL UNIT OF AN EDUCATIONAL INSTITUTION
Vasyl Andrunyk1, Liliya Chyrun 2, Lyubomyr Chyrun3
1,2Information Systems and Networks Department, 3Software Department, Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE, E-mail: 1phottt@ukr.net, 2lchirun@mail.ru, 3chyrunlv@mail.ru
The transition to an information society and the socio-economic changes taking place in Ukraine require major changes in many areas of state activity. First of all this concerns education reform and innovation in the learning process. To make the planning of the educational process and the preparation of teaching materials more universal, different learning systems use information systems that not only carry out information and analytical functions, but also create conditions for the operational management of the distributed e-learning process, act as an effective environment for organizing and managing learning, and more. It should be noted that at present there are few software products in Ukraine that can support decision making in the recording and analysis of the logistics of a university structural unit. Most of them are used abroad and are not adapted to the legal and economic systems of Ukraine, and effective specialized software for analyzing the logistics of the educational process for structural units of an institution has not actually been developed. Data mining (DM) is a part of the process of knowledge discovery in databases (KDD); it can reveal hidden dependencies in the data, identify mutual influences between the properties of objects whose information is stored in databases, and identify patterns specific to the data set. The relevance of the research problem is confirmed by the broad practical and commercial use of data mining, most often in science and business.
The main purpose and the main result of the study was the discovery of the set of attributes that most influence the decision on the success and reliability of implementing logistics in an educational institution. Based on these results, a more reasonable selection of only the strong attributes can be made in the future for experiments on the data or for further logistics testing. It was found that out of 20-38 attributes (in different cases), only 6-10 affect the decision. The built intelligent system for analyzing the logistics of a structural unit of an institution makes it possible to facilitate and speed up the work of the structural unit's employees, providing quick and easy access to relevant information and improving the quality and effectiveness of the services provided by the structural unit.
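For illustration, a minimal Python sketch of how attribute influence can be ranked (the records are toy data, not the study's logistics records): attributes with higher information gain relative to the decision are the "strong" ones worth keeping.

import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(records, attr, target):
    base = entropy([r[target] for r in records])
    remainder = 0.0
    for value in {r[attr] for r in records}:
        subset = [r[target] for r in records if r[attr] == value]
        remainder += len(subset) / len(records) * entropy(subset)
    return base - remainder

records = [
    {"funding": "high", "staff": "yes", "success": "yes"},
    {"funding": "high", "staff": "no",  "success": "yes"},
    {"funding": "low",  "staff": "yes", "success": "no"},
    {"funding": "low",  "staff": "no",  "success": "no"},
]
for attr in ("funding", "staff"):
    print(attr, information_gain(records, attr, "success"))
# funding 1.0, staff 0.0: only "funding" affects the decision here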
Keywords: information technology, data mining, logistical support.
35. Артеменко О. І., Федченко В. М., Єгорова В. Інтелектуальна система аналізу екскурсійних маршрутів
INTELLIGENT SYSTEM FOR SIGHTSEEING TOURS CONTENT ANALYSIS
Olga Artemenko1, Volodymyr Fedchenko1, Valeriya Jegorova2
1Bukovinian University, Department of Automated Management Systems,
2Information Systems and Networks Department, Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE, E-mail: 1o_hapon@meta.ua, 2changable92@gmail.com, 2ehorova.valeriya@gmail.com
The purpose of the study is to create an intelligent system for the analysis of sightseeing tours for the further development of information technology for sightseeing tour personalization, taking into account the interests and capabilities of different kinds of tourists.
The relevance of the work lies in creating tools for collecting, processing and analyzing data on the behavior of tourists during sightseeing tours in order to develop a classification of tourists and of the tourist sites they visit. The study was based on the sightseeing tours that tourists made of their own choosing while visiting the city of Chernivtsi. The system makes it possible to follow trends in tourists' decision making, to identify factors that influence the duration and cost of the sightseeing tour route, and to find the causes of changes in the itinerary during the tour.
The analysis can be used to optimize sightseeing tour itineraries in the historical part of the city. The data collected in questionnaires give a picture of tourists' behavior on excursions, of tour intensity, and of the spending of different kinds of tourists. This makes it possible to classify tourists according to different parameters, to select clusters of tourist sites of interest to certain groups of tourists, and to identify errors often made by tourists when planning a tour. The analysis results, in the form of association rules and clusters, are used to create a knowledge base for a future expert system of sightseeing tour optimization.
The practical value of the research lies in creating software for the analysis of decision making by different categories of tourists on sightseeing tours. This, in turn, provides the basis for the creation of information technologies to personalize tour routes. Such information technology will be useful not only for tourists who plan to travel, but also for the local authorities of tourist regions, which will be able to manage the tourist infrastructure, resulting in increased revenues from tourism and overall development.
Keywords — sightseeing tours, information technology, intelligent systems, tourism infrastructure.
36. Бойко Н. І. Багатовимірне подання даних для управління ІТ-проектами
MULTIDIMENSIONAL DATA REPRESENTATION FOR IT PROJECT MANAGEMENT
Nataliya Boyko
Information Systems and Networks Department, Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE, E-mail: natkaboiko@rambler.ru
In the article, theoretical principles together with methodological and practical recommendations for enhancing the effectiveness of an information system are proposed. An analysis of the basic principles and techniques of managing projects of information processes grounds the methodology of corporate information systems project management. A project management process that is carefully planned and well organized creates the opportunity for the successful implementation of any project. This process includes the development of a project plan, which in turn provides the definition and verification of the purposes and objectives of the project. It also includes managing the implementation of the project plan, obtaining accurate and objective information about its effectiveness in relation to the plan, and the mechanisms needed to apply the methodology with appropriate tools. The basic steps of project implementation and management are also noted: assessing the feasibility of the project (i.e. how it will be dealt with); defining goals and strategies for its implementation and subsequent management; planning the actions by which the project can be implemented; implementing the solutions created by the project; and evaluating, analyzing and supporting the solutions created in project management.
The purpose of this publication is to provide the theoretical knowledge and practical skills necessary for project management and for applying data mining to IT projects with the use of OLAP cubes.
The process of creating an IT project allows for the overall strategic goals of development and for the consolidation and formation of a model for operating a decision support system. This involves appropriate information technologies of project management, which make it possible to concentrate large amounts of information in a data warehouse and to use tools and intelligent systems to manage information systems through IT projects and decision support systems. The data mining technique was analyzed using OLAP cubes. The whole process of project management is a circular mechanism by which all design decisions are identified and can be planned, monitored and, through common decisions, managed. Each project is unique and serves a specific task in a particular subject area. The process includes project management methods and tools used for the description, analysis and control of the analyzed data for effective decision making in an information system (IS). Depending on the particular IS in use, project management software is required that facilitates complex projects by creating a step-by-step project schedule and monitoring the project at every turn.
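As a rough stand-in for the OLAP-cube technique mentioned above, a pandas pivot table can aggregate project facts along several dimensions (roll-up and drill-down over a small "cube"); this is an editorial sketch with invented data, not the system described in the article:

```python
import pandas as pd

# fact table: hours spent per project, quarter and resource type (invented)
facts = pd.DataFrame({
    "project":  ["A", "A", "B", "B", "A", "B"],
    "quarter":  ["Q1", "Q2", "Q1", "Q2", "Q1", "Q1"],
    "resource": ["dev", "dev", "dev", "qa", "qa", "dev"],
    "hours":    [120, 90, 200, 60, 40, 80],
})
# "cube" slice: total hours by project x quarter, drilled down by resource
cube = facts.pivot_table(index=["project", "resource"], columns="quarter",
                         values="hours", aggfunc="sum", fill_value=0)
print(cube)
```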
Keywords: methods, tools, methodology, modeling, information, project management, information processes, information systems, information technology, enterprise information system project management.
37. Вінтоняк С., Кісь Я. П., Чирун Л. Б. Розроблення інформаційної системи для управління ресторанним бізнесом
DEVELOPMENT OF AN INFORMATION SYSTEM FOR RESTAURANT BUSINESS MANAGEMENT
Stepan Vintonyak1, Yaroslav Kis1, Liliya Chyrun2
Information Systems and Networks Department, Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE, E-mail: 1Yaroslav.P.Kis@lpnu.ua, 2lchirun@mail.ru
One of the promising areas of business in the developing world is the restaurant business. Progress in the development of any business, including restaurants, depends on many factors, one of which is the use of information technologies.
The processes of the restaurant business are complex and require monitoring of accounting processes and staff behavior, transactional analysis, recording of incoming products, calculation of the cost of dishes and semi-finished products, write-off procedures, and compliance with sanitary and technical standards. The need to automate these processes stems from the large number of details that must be taken into account.
An automated information system is a set of information, economic-mathematical methods and models, and technical, software and technological tools and experts, intended for information processing and decision making. The purpose of developing the information system (IS) is to create a clear and easy-to-use service-delivery tool for the restaurant business, focused on high-quality cooperation with clients and staff, and to increase the profitability and lower the costs of catering establishments: to analyze the areas of a restaurant's activity, reduce expenditures of time and money, save human resources, improve catering management, accelerate the speed and quality of customer service, and minimize staff fraud. To build the information model of the system, the CASE tool AllFusion ERwin Data Modeler r7.2 was used; the information model is based on the data model and is used to create the database of the system. The system provides information services in the restaurant business using the following methods and tools: a queuing system, whose main elements are an incoming stream of service requests, a queue discipline and a service mechanism; a client-server architecture, whose benefits are the absence of duplicated program code between server and client software (all calculations are performed on the server), lower requirements for the computers on which the client is installed, and storage of all data on the server, which is usually protected much better than most clients' data storage; Microsoft SQL Server and Microsoft Access; the programming languages C++ and C#; the .NET platform; and ADO.NET. The implemented information system demonstrates the use of these methods and tools.
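A minimal sketch of the queuing idea behind such an order service, assuming a simple FIFO discipline; the orders are invented and the real system is far richer:

```python
from collections import deque

# incoming stream of service requests (hypothetical orders)
orders = deque(["table 3: borshch", "table 1: varenyky", "table 7: salad"])

# queue discipline: first in, first out; service mechanism: one at a time
while orders:
    order = orders.popleft()
    print("serving", order)
```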
The database software was designed and created on the basis of the decisions taken during the system analysis of the domain. The resulting information system for providing services in the restaurant business is a necessary tool in the work of any restaurant and profitable from the software developers' point of view.
Keywords — information technology, management, decision-making in business.
38. Веретеннікова Н., Кунанець Н. Е., Пасічник В. В. Інформаційно-бібліографічне забезпечення електронної науки: досвід американських колег
MODERN LIBRARIES AND INFORMATION SUPPORT OF ESCIENCE: THE EXPERIENCE OF AMERICAN SCIENTISTS
Nataliia Veretennikova1, Nataliia Kunanets2, Volodymyr Pasichnyk
Information Systems and Networks Department, Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE, E-mail: 1Natasha19061990@bigmir.net, 2nek.lviv@gmail.com, 2n_kunan@yahoo.com
Given all the changes in eScience and social communications, libraries have recognized the necessity of expanding their functional tasks. American libraries have begun to develop a new approach to the management of information resources on the basis of user information needs. Successful experience in this area has given the opportunity to establish cooperation between teachers and librarians in order to conduct successful research. As a result of establishing sustainable communication relations, librarians have better understood the researchers’ needs and, in turn, offered new solutions in such areas of knowledge as the digital management of information resources and their corresponding distribution.
The publications in the American “Journal of eScience Librarianship” are worthy of notice. The journal, devoted to research in various fields of science, was founded in 2012 and funded by the National Library of Medicine, National Institutes of Health, Department of Health and Human Services.
The aim of this article is to analyze the experience of American libraries in the sphere of information support of scientific research and the library roles in the development of eScience.
Thus, eScience is a research methodology that includes the collection, storage and creation of information resources and the provision of access to them. eScience also presents new and varied opportunities for librarianship. A number of professional organizations, including the American Library Association (ALA), the Association of Research Libraries (ARL), and the American Society for Information Science and Technology (ASIS&T), study the potential role of libraries and librarians in the field of eScience, which involves the creation of three-dimensional data sets of interdisciplinary scope. Its successful development is impossible without the involvement of librarians, who have to establish close cooperation with researchers and scientists.
The basis for successful information support of modern scientific research is a reliable information and communication infrastructure, which provides the technology and related tools to support researchers and to promote new ways of interaction between scientists. American scientists claim that the research cycle consists of several stages; they distinguish six main ones: Generate Ideas, Manage Information, Write Proposal, Perform Research, Publish Results, Preserve Research.
Information data goes through its own lifecycle. American scientists believe that information data passes through eight lifecycle stages, namely: plan, collect, assure, describe, preserve, discover, integrate and analyze. Libraries are becoming important social institutions in the context of supporting science and scientists amid curated data and in the field of eScience. DataOne is an interagency, multinational and multidisciplinary project in which organizational structures are created, and it treats librarians as an important information community in these processes. Support of the full information data lifecycle should happen in collaboration with librarians in the fields of biological, ecological and environmental science, providing the development of user-friendly tools that give access to researchers, teachers and the scientific community.
Keywords: eScience, information and communication infrastructure, library, information society.
39. Висоцька В. А., Нога А. Ю., Козлов П. Ю. Управління Web-проектами електронного бізнесу для реалізації комерційного контенту
WEB PROJECT MANAGEMENT OF E-BUSINESS FOR COMMERCIAL CONTENT SELLING
Victoria Vysotska1, Andrian Noha2, Pavlo Kozlov3
Information Systems and Networks Department, Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE, E-mail: 1Victoria.A.Vysotska@lpnu.ua, 2roman.noha@gmail.com, 3pblcbko@gmail.com
The rapid development of the Internet contributes to growing needs for the operative delivery of data of a productive and strategic nature and for new forms of information service. Documented information prepared in accordance with user needs is an information product, or commercial content, and the main object of e-commerce processes. The issues of design, development, implementation and maintenance of commercial content are relevant in view of such factors as the lack of a theoretical foundation for standardized methods and the need to unify the software processing of information resources. The practical factors of information resources processing in electronic content commerce systems (ECCS) are related to the growth of content volume on the Internet, the active development of e-business, rapidly spreading Internet accessibility, the extension of the set of information products and services, and the increasing demand for commercial content. The principles and technologies of electronic content commerce are used in creating online stores, systems for online and offline sale of content, cloud storage and cloud computing. The world's leading manufacturers of information resources processing tools, such as Apple, Google, Intel, Microsoft and Amazon, work in this direction. The aim was to develop methods and software for processing information resources to improve the efficiency of ECCS through increased sales of commercial content. The article is devoted to the development of standardized methods and software for processing the information resources of such systems. In this paper, the actual scientific problem of developing and researching methods and means of information resources processing in ECCS is solved using the designed classification, the mathematical and software support, and a generalized ECCS architecture. The ECCS classification was researched and improved on the basis of analyzing and evaluating such systems, which made it possible to determine, detail and justify the choice of their functional capabilities for designing the commercial content lifecycle.
The task of developing methods and software for the formation, management and maintenance of information products was solved with a theoretically grounded concept: automating the processing of information resources in ECCS to increase content sales to regular users, involve potential users, and expand the boundaries of the target audience.
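An editorial sketch of a commercial content lifecycle as a tiny state machine; the states and transitions are assumptions for illustration, not the ECCS architecture itself:

```python
# formation -> management -> realisation, modelled as allowed transitions
ALLOWED = {
    "created":   {"moderated"},          # formation
    "moderated": {"published"},          # management
    "published": {"sold", "archived"},   # realisation / retirement
    "sold":      {"archived"},
    "archived":  set(),
}

def advance(state: str, new_state: str) -> str:
    """Move content to the next lifecycle stage, rejecting illegal jumps."""
    if new_state not in ALLOWED[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

state = "created"
for nxt in ("moderated", "published", "sold"):
    state = advance(state, nxt)
print(state)   # sold
```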
Keywords – information resources, information technologies, electronic business, commercial content realisation, electronic commerce system classification, commercial content formation.
40. Галущак М. О., Бунь Р. А. Просторове моделювання та аналіз процесів емісії парникових газів під час видобування і перероблення кам’яного вугілля у Польщі
SPATIAL MODELING AND ANALYSIS OF PROCESSES OF GREENHOUSE GAS EMISSIONS FROM EXTRACTION AND PROCESSING OF COAL IN POLAND
Mariia Halushchak, Rostyslav Bun
Applied Mathematics Department
Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE, E-mail: rost.bun@gmail.com
Due to the anthropogenic greenhouse effect and the climate change it causes, the spatial inventory of greenhouse gas emissions in the coal industry is a topical scientific task. Such an inventory is useful for planning environmental protection measures at the regional level. The aim of this work is to develop a geoinformation technology and mathematical models for estimating greenhouse gas emission processes in the coal industry, and to carry out their spatial inventory.
A mathematical description of the greenhouse gas emission processes resulting from coal mining and from the burning of fossil fuels during coking was created. A specialized geoinformation technology for the spatial assessment of greenhouse gas emissions was developed; it is based on the elaborated mathematical models and uses a purpose-built database of geospatial input information about the coal industry.
The fugitive greenhouse gas emissions arising from mining and post-mining processes were examined, as well as emissions from burning coal, oil, natural gas and biomass in coke plants and the fugitive emissions arising during coal coking. Digital maps of the locations of mines and coke plants in Poland were created, and layers with geospatial data on the structure of GHG emissions in the Polish coal industry were formed, taking into account the specific emission factors of these objects. Based on the numerical experiments performed, a geospatial database and a digital map of GHG emissions in Poland were obtained. The inventory results were visualized with thematic digital maps, and analysis of the results identified the major GHG emission sources.
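The core of such an inventory can be illustrated by the usual activity-times-emission-factor calculation; the mine names, activity data and factor value below are invented, not the paper's data:

```python
# activity data: coal mined per source, in kilotonnes (hypothetical)
coal_mined_kt = {"mine_A": 1200.0, "mine_B": 450.0}
ef_ch4_t_per_kt = 5.8   # hypothetical fugitive CH4 emission factor, t/kt

# emission per source = activity * emission factor, ready for mapping
emissions = {m: a * ef_ch4_t_per_kt for m, a in coal_mined_kt.items()}
print(emissions)        # tonnes CH4 per source
```

In the spatial inventory, each such source is then tied to its geographic coordinates and aggregated over grid cells for the digital maps.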
Keywords: geoinformation technology, mathematical modeling, greenhouse gas emission, spatial inventory, coal industry.
41. Грицик В. В., Грицик В. В., Зозуля А. М. Базові системні структури синтезу систолічних систем опрацювання даних у реальному часі
BASIC SYSTEM STRUCTURES FOR THE SYNTHESIS OF REAL-TIME SYSTOLIC DATA PROCESSING SYSTEMS
Volodymyr Hrytsyk1, Volodymyr Hrytsyk2, Andrij Zozulya2
1Information Systems and Networks Department, Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE
2Computer Systems and Networks Department, Ternopil Ivan Puluj National Technical University, Ruska Str., 56, Ternopil, 46001, UKRAINE, E-mail: ssriii@i.ua
The automated intelligent processing of data from various one-, two- and three-dimensional receivers is today a topical direction in the development of advanced information technology.
In this regard, it is necessary to develop and study parallel-hierarchical data processing systems and programmable logic integrated circuits as models of effective methods and tools for the information-analytical systems of complex systems and processors. Promising new, highly productive uniform systems of data processing in parallel-hierarchical structures, combined with multiprocessing, need to be explored. Moreover, it is important to develop and create uniform systolic structures that ensure a real-time model.
The basic system structures for synthesizing complex systolic systems that solve data processing tasks with parallel algorithms are considered in this paper. These parallel algorithms are part of hardware-oriented methods for implementing the system structures of real-time information-analytical systems.
Important scientific and applied problems are solved: theoretical foundations of information-analytical systems for specific subject areas and new models of real-time data processing are developed, and methods and data processing architectures for an effective real-time information processing system are investigated on the basis of the studied theory. Based on an examination of different synthesis methods and the construction of uniform computing environments, it is concluded that systolic data processing systems can be built from basic “multiply-add” and “add” sections together with transmitting processor elements, with the basic probabilistic characteristics defined by Markov processes. Real-time data processing in information-analytical systems based on homogeneous computing environments, systolic data processing methods, and a problem-oriented computer vision system are presented.
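A toy software model of the systolic idea, assuming a linear array of multiply-add cells computing an FIR convolution; in hardware every cell would fire in parallel on each clock beat, while this sketch emulates the beats sequentially:

```python
def systolic_fir(weights, xs):
    """Emulate a linear systolic array: each cell holds one weight and
    performs one multiply-add per beat as samples shift through."""
    x_pipe = [0.0] * len(weights)   # samples currently inside the array
    out = []
    for x in xs:                    # one new sample per clock beat
        x_pipe = [x] + x_pipe[:-1]  # shift register moves data along
        out.append(sum(w * xi for w, xi in zip(weights, x_pipe)))
    return out

# impulse input reveals the taps flowing through the array
print(systolic_fir([1, 2, 3], [1, 0, 0, 0]))   # -> [1, 2, 3, 0]
```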
Keywords – paralleling synthesis algorithms, data processing, uniform computing environment, providing real-time.
42. Іванущак Н. М., Пасічник В. В. Узагальнена модель еволюції мережевого ансамблю в умовах дестабілізаційних загроз
GENERALIZED MODEL OF THE EVOLUTION OF NETWORK ENSEMBLE IN CONDITIONS OF DESTABILIZING THREATS
Nataliya Ivanuschak1, Volodymyr Pasichnyk2
1Computer Systems and Networks Department, Yuriy Fedkovych Chernivtsi National University, M. Kotsyubynsky Str., 2, Chernivtsi, 58000, UKRAINE, E-mail: ivanuschak@yandex.ua
2Information Systems and Networks Department, Lviv Polytechnic National University, S.Bandery Str., 12, Lviv, 79013, UKRAINE, E-mail: vpasichnyk@gmail.com
In this paper we solve problems that are essential for the development of new methods of mathematical modelling: the search for rational approaches that allow identifying the structure and parameters of models of local computer networks from observational data, and the development of methods and tools for the mathematical modelling of the structures of information and communication networks (computer networks, for example), which is essential for improving the functioning of these networks and protecting their elements from targeted attacks. The scientific problem of the mathematical modelling of real local computer networks based on their probability characteristics was solved. Methods that allow identifying the structure and parameters of models of local computer networks from observational data were investigated and validated. A new method for generating the structure of local computer networks with a given density function of the node degree distribution, using the theory of complex networks, was suggested; it makes it possible to reproduce these networks as stochastic graphs with given probability properties, to assess the development and behaviour of real computer networks as their structural properties change, and to explore the vulnerabilities of a network under the scenario of targeted attacks on its nodes.
The suggested mathematical models of attack scenario reproduction were used to evaluate the vulnerability of the simulated stochastic graphs and to solve the problem of the stability of scale-free computer networks against random and targeted attacks. The abstract model of a computer network provides a prognostic assessment of the attacking actions of various categories of offenders through the implementation of the most common threat scenarios for network security, such as directed and random attacks on nodes. This prediction does not require the extensive resources inherent in automated security analysis systems. Random attacks (denials, crashes, R-attacks) use a random selection of the attacked node; the classic strategy of targeted attacks (I-attacks) is the consecutive destruction of the nodes with maximum connectivity. The generalized model of the evolution of an agent network ensemble under destabilizing threats was investigated. The main components of the descriptive design of the network are a threat model and a security model. These models enable the description of threats close to real ones and the exploration of complex attacks on the system, rather than their traditional interpretation in the theory of complex networks. The optimal strategy for protecting local networks from targeted attacks was defined.
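A hedged sketch of such an attack experiment (not the authors' code): a scale-free graph is degraded by random (R-type) and maximum-degree targeted (I-type) node removals, and the surviving giant component is compared; all parameters are illustrative:

```python
import random
import networkx as nx

G = nx.barabasi_albert_graph(n=500, m=2, seed=1)   # scale-free ensemble

def giant(g):
    """Size of the largest connected component."""
    return max(len(c) for c in nx.connected_components(g))

for strategy in ("random", "targeted"):
    g = G.copy()
    for _ in range(100):                           # remove 100 nodes
        if strategy == "random":                   # R-attack: random node
            v = random.choice(list(g.nodes))
        else:                                      # I-attack: max degree
            v = max(g.degree, key=lambda d: d[1])[0]
        g.remove_node(v)
    print(strategy, "giant component:", giant(g))
```

Targeted removal typically shatters the giant component far faster than random removal, which is the vulnerability the abstract describes.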
Keywords: computer networks, stochastic graph, system of security analysis.
43. Катренко А. В., Пастернак О. В. Математичні моделі інвестування в галузі інформаційних технологій
MATHEMATICAL MODELS OF INVESTMENT IN FIELD OF INFORMATION TECHNOLOGY
Anatoly Katrenko1, Olena Pasternak2
Information Systems and Networks Department, Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE, E-mail: 1anatolkatrenko@gmail.com, 2summery17@gmail.com
Projects in the field of information technology are high-tech and innovative, and they will be highly profitable if implemented successfully, but they are characterized by a high level of risk and uncertainty. While in other areas sufficient mathematical models of project investment have been developed, in information technology the situation is different and such models still require development. Developing mathematical models that adequately take into account the particularities of investment in IT is important because their practical application will reduce the risk and the value of potential losses. These models are also oriented toward making justified, optimal decisions about the total amount of investment, the beginning and end of its implementation, and the allocation of investment over the period. In the paper, the basic factors affecting the efficiency of IT investments are systematized, namely the degree of IT infrastructure development, the size of the distribution area, the level of competition, and the cost of modern IT in the country of investment. Their structural parts and their impact on the components of IT investment efficiency were investigated. The authors propose a multi-objective model of allocating IT investment over the period and selecting the initial date of investment. Decision-making methods were analysed, and recommendations for using these methods were elaborated.
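A deliberately simplified single-criterion stand-in for the allocation problem (the paper's model is multi-objective): distribute a fixed budget over periods to maximize expected return under per-period caps; all coefficients are invented:

```python
from scipy.optimize import linprog

returns = [0.12, 0.18, 0.15]            # expected return per UAH, by period
total_budget, per_period_cap = 100.0, 50.0

# maximize sum(r_i * x_i)  <=>  minimize sum(-r_i * x_i)
res = linprog(c=[-r for r in returns],
              A_ub=[[1, 1, 1]], b_ub=[total_budget],
              bounds=[(0, per_period_cap)] * 3)
print(res.x, -res.fun)                  # allocation per period, total return
```

A genuinely multi-objective version would add risk as a second criterion and trade the two off, e.g. via scalarization.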
Keywords – investment, information technologies, quality criterion, mathematical model, methods for making decision.
44. Стрямець О. С., Бунь Р. А., Стрямець С. П., Данилів Р. І. Геопросторовий аналіз поглинань та емісій парникових газів лісами Польських Карпат
GEOSPATIAL ANALYSIS OF GREENHOUSE GAS ABSORPTION AND EMISSION BY FORESTS IN POLISH CARPATHIANS
Oleksandr Striamets1, Rostyslav Bun2, Sergiy Stryamets3, Roksolana Danyliv4
1Information Systems and Networks Department,
2Applied Mathematics Department,
3Automated Control Systems Department,
4Department of Psychology, Pedagogics and Social Management,
Lviv Polytechnic National University, S. Bandery Str., 12, Lviv, 79013, UKRAINE, E-mail: 1alexandrtrue@gmail.com, 1striamets@gmail.com, 2rost.bun@gmail.com
The study of forest ecosystems as major carbon sinks is relevant due to the anthropogenic greenhouse effect and the climate change it causes. The practical significance of such research lies in developing recommendations for optimizing the age structure and species composition of forests to increase carbon deposition.
The aim of this work is to develop mathematical and software tools for the geospatial analysis of greenhouse gas emission and absorption in the forestry sector using the example of the Polish Carpathians region: building geospatial databases and digital forest maps and evaluating the deposited carbon on the basis of statistical data on the volume of wood, tree species composition, distribution of age classes and other forestry indicators from official sources.
In this study, the results of geospatial modeling and analysis of carbon flows in the main phytomass components of the forests of the Subcarpathian, Lesser Poland and Silesian voivodeships of Poland are presented. A multi-layer digital map was compiled, and a geospatial analysis of the carbon deposited in the forests of Poland was conducted. A geoinformation technology for the spatial analysis of greenhouse gas absorption and emission in Polish forestry, as well as for forming digital maps of deposited carbon, was created. Statistical data on growing stock, species composition and distribution by age classes, as well as other forest inventory indicators published by official sources, were used in this investigation.
A digital map of forests was elaborated, covering the Subcarpathian, Lesser Poland and Silesian voivodeships. Information layers were formed with data on structure, stand composition, stock, phytomass, growth, deposited carbon, etc. The respective reservoirs of greenhouse gas emission, such as wood destroyed by fire and dead, damaged and withdrawn timber, were taken into account.
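The volume-to-carbon conversion at the heart of such estimates can be sketched IPCC-style as carbon = growing stock × biomass conversion and expansion factor × carbon fraction; the factor values below are illustrative, not the paper's:

```python
stock_m3_per_ha = 260.0   # growing stock volume (hypothetical stand)
bcef = 0.6                # biomass conversion and expansion factor, t/m3
carbon_fraction = 0.47    # tonnes of carbon per tonne of dry biomass

carbon_t_per_ha = stock_m3_per_ha * bcef * carbon_fraction
print(f"deposited carbon: {carbon_t_per_ha:.1f} tC/ha")   # ~73.3 tC/ha
```

In the geoinformation system, this calculation is repeated per map polygon with species- and age-class-specific factors.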
Keywords: information technology, digital maps, geographic information system, forest inventory, greenhouse gas, carbon deposited.
45. Рубан М. Л., Осадчий О. В., Лесніков А. Г. Діагностичний комплекс оцінювання функціонального стану людини за показниками фотоплетизмограми
DIAGNOSTIC COMPLEX FOR EVALUATING THE FUNCTIONAL STATE OF A HUMAN BY PHOTOPLETHYSMOGRAPHY INDICATORS
Maryna Ruban, Andrew Lyesnikov, Alexander Osadchiy
National Technical University of Ukraine “Kyiv Polytechnic Institute”, Prospekt Pobedy, 36, Kiev, 03056, UKRAINE, E-mail: ruban3103@gmail.com
Today, the Ukrainian healthcare market contains many of the latest pulse oximeter designs from foreign and domestic manufacturers. Some devices have a major disadvantage, namely the limited functionality of the software supplied with the pulse oximeter: only blood oxygenation and heart rate values can be recorded in the database, while the parameters of the photoplethysmography (PPG) curves, which are quite informative, cannot be explored.
Software development with advanced features is relevant and can extend the device functionality, providing the processing PPG parameters in real time. It helps to determine the functional state of the human body quickly and without unnecessary equipment.
The aim is to develop algorithms and software that allow efficiently managing and recording various PPG parameters of a human, namely software that avoids the disadvantages of the standard Pulsmet software.
The proposed software “UtasOxi”, implemented in the Delphi programming language, determines the selected parameters analogously to the standard Pulsmeter software developed by UTAS (Ukraine) for the “YUTASOKSY-200” pulse oximeter.
“UtasOxi” is fast and easy-to-use software for recording saturation values, heart rate and photoplethysmographic curves. Measurements are taken in real time and displayed in three separate graphs; after the measurements, the data can be stored in Excel tables.
Using the developed software, it is possible to analyze the adaptation of the human body to physical training. Currently, the authors are working on distinct parameters for determining the state of human adaptation. In the future, the developed software will make it possible to determine the state of a patient's adaptation through mathematical operations in which the informative values of the photoplethysmography amplitude parameters are determined by the change in amplitude and by the time it takes the photoplethysmography parameters to return to their original, pre-exercise state. Thanks to the developed software, the number of measurements per second was increased to 150. Analysis of the amplitude data of the received pulse wave parameters makes it possible to predict the human body's adaptation to physical training.
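A hypothetical sketch of the amplitude analysis described above: detect pulse-wave peaks in a synthetic PPG trace sampled at 150 Hz and find when the amplitude returns toward its baseline; the signal, envelope and threshold are invented:

```python
import numpy as np
from scipy.signal import find_peaks

fs = 150                                  # Hz, as in the 150 samples/s above
t = np.arange(0, 10, 1 / fs)
# synthetic PPG: 1.2 Hz pulse wave whose amplitude recovers after exercise
ppg = (1 - 0.5 * np.exp(-t / 3)) * np.sin(2 * np.pi * 1.2 * t)

peaks, _ = find_peaks(ppg)                # one peak per pulse wave
amplitudes = ppg[peaks]
recovered = t[peaks][amplitudes > 0.95]   # instants with near-baseline amplitude
if recovered.size:
    print(f"amplitude back near baseline after ~{recovered[0]:.1f} s")
```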
Keywords: functional state, photoplethysmography.
46. Угрин Д. І., Шевчук С. Ф., Гаць Б. М, Баляснікова О. А. Планування та моделювання платформи туристичного бізнесу на основі хмарної технології і її захист даних
PLANNING AND SIMULATION OF A TOURISM BUSINESS PLATFORM BASED ON CLOUD TECHNOLOGY AND ITS DATA PROTECTION
Dmitry Ugrin1, Sergey Shevchuk1, Bogdan Hats2, Oksana Balyasnykova1
1Information Systems Department, Chernivtsi faculty of National Technical University “Kharkiv Polytechnic Institute”, Golovna, Str., 203-A, Chernivtsi, 58032, UKRAINE, E-mail: ugrind@mail.ru
2Bukovynskiy University, Darvina, Str., 2-A, Chernivtsi, 58000, UKRAINE, E-mail: gats@i.ua, gatsb@yandex.ru
The similarity of the service processes of tourism industry organizations in various fields allows the development and implementation of standard software solutions for planning the development of tourism. However, introducing a large number of “local” software packages in geographically distributed travel organizations leads to significant expenditures of time and material resources on the timely delivery of new versions to customers and on the tuning, maintenance, updating, modification and monitoring of the software.
The practical implementation of the concept of cloud computing for tourism business planning requires the development of a software system (platform) with a wide range of functionality and data protection. This addresses the need to ensure the implementation and maintenance of software and the solution of optimization problems arising in travel agencies and other organizations of the sector (hotels, tourist complexes, treatment and rest facilities). Furthermore, both private and public clouds must provide an adequate level of data protection against various threats.
“Cloud” services for planning and managing the complex processes of travel agencies and tour operators, hotels, holiday, medical and sports facilities were developed separately, depending on the type of tourism. Within the proposed platform prototype, the following major services must be consolidated: hotel and restaurant service, health and medical service, historical and sports service, business service, and travel tours.
The use of “cloud” services for planning and managing the tourism business can reduce the financial costs associated with deploying, maintaining and updating both software and hardware. The paper describes the infrastructure of a cloud technology platform for tourism business planning and proposes its architecture, based on five levels: tourist business services, database management systems, data storage, the application server and web server, and the client software. The paper also describes a library of software modules for solving optimization problems and presents the planning and modeling of the platform's data protection.
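One plausible data-protection measure for such a platform, sketched with the `cryptography` package: client records are symmetrically encrypted before being stored in the cloud. The record and key handling are illustrative, not the paper's scheme:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()       # kept by the platform, never by the cloud
f = Fernet(key)

record = b"tourist: J. Doe; tour: Chernivtsi old town; card: **** 1234"
token = f.encrypt(record)         # ciphertext is what the cloud stores
print(f.decrypt(token) == record) # True: round trip works
```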
Keywords — information platform for tourism business planning, cloud computing, data protection and security, information service infrastructure of tourism.
47. Устенко С. В., Бібко О. О. Використання методів біоніки в інтелектуальних інформаційних системах
USE OF METHODS OF BIONICS IN INTELLIGENT INFORMATION SYSTEMS
Stanislav Ustenko, Olga Bibko
Economic Information Systems Department, Kyiv National Economic University, Lvivska sq., 14, Kyiv, 04053, UKRAINE, E-mail: stasustenko@mail.ru
A new direction in the development of artificial intelligence methods is swarm intelligence, which simulates the collective intelligence of social living beings. This area is little studied, but it gives good results in solving various optimization problems, which shows the prospects of its further development. The main feature of optimization methods based on collective intelligence is their bionic nature: they are based on modeling the activities of animals whose behavior is collective. In nature, such behavior efficiently solves various important practical problems, which indicates the high potential of these methods for solving complex practical optimization problems. Consequently, it is important to study bionic methods and their possible applications, and to develop new mathematical models based on the social behavior of animals for optimization problems to which these methods have not yet been applied.
The aim of the article is to study the possibilities of bionic methods based on collective intelligence as solutions to optimization problems in various technical fields (on the example of a bee colony optimization).
Bee colony optimization is a probabilistic metaheuristic in which the probabilities are set on the basis of information about the quality of the solutions obtained in the previous step. It can be used for both static and dynamic combinatorial optimization problems. Convergence is guaranteed, so the optimal solution is eventually obtained, but the convergence rate is unknown.
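A compressed, illustrative bee-colony-style search for a one-dimensional minimum; real ABC variants add scout and onlooker roles and quality-proportional selection, as the abstract notes:

```python
import random

def f(x):                     # objective to minimize: (x - 3)^2
    return (x - 3.0) ** 2

bees = [random.uniform(-10, 10) for _ in range(20)]   # food sources
for _ in range(100):
    for i, x in enumerate(bees):
        trial = x + random.uniform(-1, 1)             # local neighbour move
        if f(trial) < f(x):                           # greedy replacement
            bees[i] = trial

print(round(min(bees, key=f), 3))                     # ~3.0
```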
After analyzing the existing modifications and different applications of bee colony optimization, we have identified the following advantages:
– the method is not prone to getting stuck in local optima, because it is based on random search;
– multi-agent implementation;
– search for the best solutions based on the decisions of all agents of the bee colony;
– it can be used in dynamic tasks, because it adapts to environmental changes;
– it can be used for both discrete and continuous optimization problems.
Keywords: agent, bee colony, bionic method, swarm intelligence, optimization, self-organization.
УДК 004.942
Н. М. Іванущак1, В. В. Пасічник2
1Чернівецький національний університет імені Юрія Федьковича,
2Національний університет “Львівська політехніка”,
кафедра інформаційних систем та мереж.
УЗАГАЛЬНЕНА МОДЕЛЬ ЕВОЛЮЦІЇ МЕРЕЖЕВОГО АНСАМБЛЮ В УМОВАХ ДЕСТАБІЛІЗАЦІЙНИХ ЗАГРОЗ
© Іванущак Н. М., Пасічник В. В., 2015
Розроблено нову математичну модель генерування структури локальних комп’ютерних мереж та узагальнену модель еволюції мережевого ансамблю в умовах дестабілізаційних загроз, розв’язано задачу про стійкість мереж до випадкових та спрямованих атак.
Ключові слова: комп’ютерні мережі, стохастичний граф, системи аналізу захищеності.
In this paper we have developed a new mathematical model for generating the structure of local computer networks and a generalized model of the evolution of a network ensemble under destabilizing threats, and have solved the problem of the stability of networks against random and targeted attacks.
Key words: computer networks, stochastic graph, system security analysis.
Література – 9
УДК 004.738.5
К. А. Алєксєєва
Національний університет “Львівська політехніка”,
кафедра соціальних комунікацій та інформаційної діяльності
МЕТОДИ ПІДВИЩЕННЯ ЕФЕКТИВНОСТІ УПРАВЛІННЯ КОМЕРЦІЙНИМИ ВЕБ-ПРОЕКТАМИ ЗА УМОВ НЕВИЗНАЧЕНОСТІ
© Алєксєєва К. А., 2015
Запропоновано метод управління контентом як етап його життєвого циклу, який ґрунтується на застосуванні нечіткої логіки. Метод управління контентом описує процеси формування комерційних web-ресурсів та спрощує технологію управління контентом. Описано способи і процедури формування проектних рішень в управлінні комерційними web-проектами за умови неповноти та неточності деяких характеристик проекту. Проаналізовано основні чинники прийняття проектних рішень, визначено причини та природу виникнення неповноти і неточності проектних характеристик. Розроблено процедури зменшення рівня неповноти та неточності характеристик проекту на основі нечіткої логіки. Запропонований метод дає можливість створити засоби опрацювання web-ресурсів та реалізувати підсистему управління контентом. Завдання управління контентом: формування та ротація оперативних і ретроспективних баз даних; персоналізація роботи користувачів, збереження персональних запитів користувачів і джерел, ведення статистики роботи; забезпечення пошуку в базах даних; генерація вихідних форм; інформаційна взаємодія з іншими базами даних; формування та опрацювання web-ресурсу. Підсистему управління контентом реалізовано його кешуванням (генерує сторінку один раз; надалі вона завантажується з кешу, який оновлюється автоматично після закінчення деякого терміну або при внесенні змін до певних розділів web-ресурсу, або за командою адміністратора) або за допомогою інформаційних блоків (збереження блоків на етапі редагування web-ресурсу та збирання сторінки з цих блоків під час її запиту користувачем).
Ключові слова: web-проект, управління проектами, невизначеність даних, прийняття проектних рішень, web-ресурс, комерційний контент, контент-аналіз, Інтернет-маркетинг, нечіткі дані, нечітка логіка.
The method of content management as a stage of its life cycle, based on fuzzy logic, is proposed. The method describes the formation of commercial web resources and the automation technology that simplifies content management. Ways and procedures of project decision making in the management of commercial web projects under conditions of incompleteness and inaccuracy of some project characteristics are described in the paper. The principal factors of project decision making were analyzed, and the reasons for and the nature of the incompleteness and inaccuracy of project characteristics are defined. Procedures for reducing the levels of incompleteness and inaccuracy of project characteristics, based on fuzzy logic, are developed. The proposed method makes it possible to create web resource processing tools and to implement a content management subsystem. The tasks of content management are: formation and rotation of operational and retrospective databases; personalization of the user experience; storage of personal user queries and sources; operation statistics; search in databases; generation of output forms; information interaction with other databases; and web resource formation and processing. The content management subsystem is implemented through caching (a page is generated once and afterwards loaded much faster from the cache, which is updated automatically after a certain period of time, when changes are made to specific sections of the web resource, or by administrator command) or through information blocks (blocks are stored at the web resource editing stage and the page is assembled from these blocks at the user's request).
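A minimal sketch of how an imprecise project characteristic can be represented with fuzzy logic, assuming a triangular membership function; the characteristic and the numbers are invented, not the author's model:

```python
def triangular(x, a, b, c):
    """Membership degree of x in a triangular fuzzy number (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# "audience is roughly 10 000, certainly between 6 000 and 15 000"
for audience in (5000, 8000, 10000, 13000):
    print(audience, round(triangular(audience, 6000, 10000, 15000), 2))
```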
Key words: project, Project Management, data uncertainty, Project decision making, web resources, commercial content, content analysis, Internet Marketing, fuzzy data, fuzzy logic.
Література – 11
УДК 004.9
В. А. Андруник1, Л. Б. Чирун1, Л. В. Чирун2
Національний університет “Львівська політехніка”,
1кафедра інформаційних систем та мереж,
2кафедра програмного забезпечення
ІНТЕЛЕКТУАЛЬНИЙ АНАЛІЗ МАТЕРІАЛЬНО-ТЕХНІЧНОГО ЗАБЕЗПЕЧЕННЯ СТРУКТУРНОЇ ОДИНИЦІ НАВЧАЛЬНОГО ЗАКЛАДУ
© Андруник В. А., Чирун Л. Б., Чирун Л. В., 2015
Нові інформаційні, телекомунікаційні технології сприяють оптимізації управління навчальним процесом. Запропоновано структуру побудови ІС-аналізу матеріально-технічного забезпечення структурної одиниці навчального закладу.
Ключові слова: інформаційні технології, інтелектуальний аналіз даних, матеріально-технічне забезпечення.
New information and telecommunication technologies contribute to optimization in the management of studies. In the article, the structure of an IS for analyzing the logistics of a structural unit of an educational institution is suggested.
Key words: information technology, data mining, logistical support.
Література – 21
УДК 004.825, 004.942
О. І. Артеменко 1, В. В. Єгорова 2, В. М. Федченко 1
1Буковинський університет,
кафедра автоматизованих систем управління.
2Національний університет “Львівська політехніка”,
кафедра інформаційних систем та мереж
ІНТЕЛЕКТУАЛЬНА СИСТЕМА АНАЛІЗУ ЕКСКУРСІЙНИХ МАРШРУТІВ
© Артеменко О. І., Єгорова В. В., Федченко В. М., 2015
Створено інтелектуальну систему аналізу екскурсійних маршрутів. На її основі розроблено базу знань для дорадчого засобу вибору екскурсійного маршруту. Проаналізовано результати останніх досліджень технологій аналізу даних просторового переміщення туристів. Розроблено засоби збирання, обробки та аналізу даних про переміщення та витрати туристів під час екскурсій. Отримані результати комп’ютерних розрахунків дають змогу визначити тенденції в прийнятті рішень туристами на екскурсіях та стануть основою бази знань майбутньої експертної системи оптимізації екскурсійного маршруту.
Ключові слова: екскурсійні маршрути, інформаційні технології, інтелектуальні системи, туристична інфраструктура.
An intelligent system for sightseeing tour analysis was developed. Using the results of the analysis, a knowledge base was created for an advisory program tool for tour route selection and real-time tracking. An analysis of recent research in data mining of tourists' spatial movement was made. Program tools for collecting, processing and analyzing data on tourists' movement and expenditures during sightseeing tours were developed. The results of the computer simulations reveal trends in tourists' decision making on sightseeing tours and will form the basis of the knowledge base of a future tour route optimization expert system.
Key words: sightseeing tours, information technology, intelligent systems, tourism infrastructure.
Література – 16
УДК 004.9
О. І. Артеменко1, В. В. Пасічник2, В. В. Єгорова2
1Буковинський університет м. Чернівці,
кафедра автоматизованих систем управління.
2Національний університет «Львівська політехніка»,
кафедра інформаційних систем та мереж.
ІНФОРМАЦІЙНІ ТЕХНОЛОГІЇ В ГАЛУЗІ ТУРИЗМУ. АНАЛІЗ ЗАСТОСУВАНЬ ТА РЕЗУЛЬТАТІВ ДОСЛІДЖЕНЬ
© Артеменко О. І., Пасічник В. В., Єгорова В. В., 2015
Стаття подана у формі аналітичного огляду новітніх інформаційних технологій в сфері туризму. Автори подають характеристику найвагоміших досліджень, які проводять у провідних лабораторіях з проблематики електронного туризму провідні спеціалісти галузі. Проведено аналіз результативності досліджень та виявлено низку актуальних завдань у сфері інформаційних технологій, зорієнтованих на галузь туризму, які потребують виконання.
Ключові слова: туризм, інформаційні технології, е-туризм, системи супроводу подорожі, системи підтримки прийняття рішень, мобільні інформаційні технології.
The article has the form of an analytical review of new information technologies in tourism. The authors present the most important characteristics of the research conducted by leading specialists in the e-tourism industry. An analysis of the impact of the research works is made, and a number of urgent problems in the IT-oriented tourism sector are identified.
Key words: tourism, information technology, e-tourism, in-trip systems, decision support systems, mobile information technology.
Література – 73
УДК 004.9
О. В. Бісікало
Вінницький національний технічний університет, кафедра автоматики та інформаційно-вимірювальної техніки
СТАТИСТИЧНИЙ АНАЛІЗ СКЛАДНИХ ЗАЛЕЖНОСТЕЙ У ТЕКСТІ
© Бісікало О. В., 2015
Розглянуто обґрунтування підходу до застосування складних залежностей між словоформами для розв’язання задач семантичного аналізу тексту. Сформульовані основні положення підходу та визначені у вигляді гіпотез основні його переваги. Запропоновано формальне поняття предметної області. Отримано статистичні та інформаційні оцінки зв’язків між лемами тексту, які технологічно можна визначити за допомогою сучасних лінгвістичних пакетів, зокрема DKPro Core.
Ключові слова: словоформа, лема, складна залежність, розподіл Парето, дерево зв’язків.
The approach to applying complex dependencies between word-forms to semantic text analysis problems is grounded in the article. The general points and the main advantages of the approach are formulated, and a formal notion of the subject area is suggested. Statistical and information estimates of the relations between lemmas have been obtained; they can be determined technologically using modern language packs such as DKPro Core.
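A small sketch of the kind of frequency estimate involved: lemma frequencies in a text typically fall off heavy-tailed (Pareto/Zipf-like) with rank. Lemmatization is faked by lowercasing here, whereas a real pipeline would use a package such as DKPro Core, as the abstract notes:

```python
from collections import Counter

text = ("the cat saw the dog and the dog saw the cat "
        "while the bird saw nothing")
freqs = Counter(text.lower().split()).most_common()
for rank, (lemma, n) in enumerate(freqs, start=1):
    print(rank, lemma, n)      # frequency falls off quickly with rank
```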
Key words: word-form, lemma, complex dependency, Pareto distribution, tree of relations.
Література – 9
УДК 004.652
А. Ю. Берко1, К. А. Алєксєєва2
Національний університет “Львівська політехніка”,
1кафедра загальної екології та екоінформаційних систем,
2кафедра соціальних комунікацій та інформаційної діяльності
ОПРАЦЮВАННЯ НЕОДНОРІДНИХ ДАНИХ В IНФОРМАЦIЙНИХ РЕСУРСАХ web-СИСТЕМ
© Берко А. Ю., Алєксєєва К. А., 2015
Описано метод інтегрованого опрацювання неоднорідних інформаційних ресурсів web-систем, який ґрунтується на моделі подання даних як узгодженого поєднання значень, правил їх зображення, правил інтерпретації та структури. Метод передбачає декомпозицію загального процесу на підпроцеси інтеграції значень, синтаксису даних, семантики і структури. Перевагою такого підходу до інтеграційних процесів є можливість їх виконання на рівні метасхем даних, що зменшує кількість звернень до власне даних web-систем, обсяги яких можуть бути значними.
Ключові слова: web-ресурс, значення даних, інтеграція даних, розподілені системи даних, неоднорідні дані.
In the paper, a method of the integrated processing of the heterogeneous information resources of web systems is described. The method is based on a model of data description as a coherent combination of data values, rules of data representation, interpretation rules and data structure. It involves the decomposition of the general process into subprocesses of integrating data values, data syntax, semantics and structure. The advantage of this approach is that the integration process can be performed at the data metascheme level, which reduces the number of access operations to the very large data sets of web systems.
Key words: web-resource, data value, data integration, distributed data systems, heterogeneous data.
Література – 10
УДК 004.738.5
А. Ю. Берко1, В. А. Висоцька2, Л. В. Чирун3
Національний університет “Львівська політехніка”,
1кафедра загальної екології та екоінформаційних систем,
2кафедра інформаційних систем та мереж,
3кафедра програмного забезпечення
ЛІНГВІСТИЧНИЙ АНАЛІЗ ТЕКСТОВОГО КОМЕРЦІЙНОГО КОНТЕНТУ
© Берко А. Ю., Висоцька В. А., Чирун Л. В., 2015
У цій роботі проаналізовано основні проблеми електронної контент-комерції та функціональних сервісів опрацювання комерційного контенту. Запропонована модель дає можливість створити засоби опрацювання інформаційних ресурсів в системах електронної контент-комерції (СЕКК) та реалізувати підсистеми формування, управління та супроводу комерційного контенту. Процес проектування та створення СЕКК за результатами Інтернет-маркетингу є ітеративним і містить у своєму складі низку етапів від аналізу, проектування, розроблення плану до створення прототипу і експериментальних випробувань, починаючи з формування специфікацій, верстання, створення шаблону контенту, формування контенту та його подальше розміщення згідно з структурою сайта. На початкових етапах до визначення функціональних вимог і початку процесу розроблення до процесу залучають кінцевих користувачів за допомогою листків опитування, альтернатив проектування і прототипів різного ступеня готовності. Без значних зусиль збирають цінну інформацію, одночасно викликаючи у користувачів відчуття безпосередньої участі в процесі проектування та завойовуючи їхню довіру. Проаналізовано способи та моделі послідовності опрацювання інформаційних ресурсів в системах електронної контент-комерції та виділено основні закономірності переходу від процесів формування комерційного контенту до його реалізації. Створено формальну модель систем електронної комерції, що дало змогу реалізувати етапи життєвого циклу комерційного контенту. Розроблено формальні моделі опрацювання інформаційних ресурсів у системах електронної контент-комерції, що дало змогу створити узагальнену типову архітектуру системи електронної контент-комерції. Запропоновано узагальнену типову архітектуру системи електронної контент-комерції, що дало змогу реалізувати процеси формування, управління та реалізації комерційного контенту.
Ключові слова: інформаційний ресурс, комерційний контент, контент-аналіз, контент-моніторинг, контентний пошук, система електронної контент-комерції.
In the given article, the main problems of electronic content commerce and the functional services of commercial content processing are analyzed. The proposed model gives an opportunity to create an instrument of information resources processing in electronic content commerce systems (ECCS) and to implement the subsystems of commercial content formation, management and support. The process of ECCS design and creation driven by Internet marketing results is iterative and contains a number of stages, from analysis, design and development of a plan to prototype construction and experimental tests, beginning with the formation of specifications and layout, content template creation, content formation and its subsequent publishing according to the site's structure. In the initial stages, before functional requirements are set and development begins, end users are involved in the process through questionnaires, alternative designs and prototypes of varying degrees of readiness. Valuable information is thus collected without much effort, while users gain a sense of direct involvement in the design process and their trust is won. The paper analyzes the methods and models of the sequence of information resources processing in ECCS and identifies the basic regularities of the transition from commercial content formation to its realization. A formal model of electronic commerce systems is created, which allows implementing the stages of the commercial content lifecycle. Formal models of information resources processing in ECCS are developed, which make it possible to create a generalized typical ECCS architecture. The proposed generalized typical architecture allows implementing the processes of commercial content formation, management and realization.
Key words: information resources, commercial content, content analysis, content monitoring, content search, electronic content commerce system.
Література – 53
УДК 007.51(075.8)
Н. І. Бойко
Національний університет “Львівська політехніка”,
кафедра інформаційних систем та мереж.
БАГАТОВИМІРНЕ ПОДАННЯ ДАНИХ ДЛЯ УПРАВЛІННЯ ІТ-ПРОЕКТАМИ
© Бойко Н. І., 2015
Обґрунтовано теоретичні положення, наведено методичні та практичні рекомендації, що дають змогу підвищити дієвість функціонування інформаційної системи. Наведено результати аналізу основних принципів та методів управління проектами інформаційних процесів та обґрунтовано методологію формування корпоративної інформаційної системи управління проектами. Розглянуто процес створення ІТ-проекту, що дає змогу врахувати загальні стратегічні цілі розвитку, об’єднання та формування моделі для експлуатації системи підтримки прийняття рішень. Проаналізовано методику інтелектуального аналізу даних за допомогою OLAP-кубів.
Ключові слова: метод, інструменти, методологія, моделювання, інформація, управління проектами, інформаційний процес, інформаційна система, інформаційна технологія, корпоративна інформаційна система управління проектами.
The article presents theoretical principles and proposes methodological and practical recommendations for enhancing the effectiveness of an information system. An analysis of the basic principles and techniques of project management of information processes is presented, and a methodology of corporate information systems project management is grounded. The process of creating an IT project is described, which allows for the overall strategic goals of development and the consolidation and formation of a model for operating a decision support system. The data mining technique using OLAP cubes is analysed.
Key words: methods, tools, methodology, modeling, information, project management, information processes, information systems, information technology, enterprise information system project management.
Література – 17
УДК 004.738.5
С. М. Вінтоняк, Я. П. Кісь, Л. Б. Чирун
Національний університет “Львівська політехніка”,
кафедра інформаційних систем та мереж
РОЗРОБЛЕННЯ ІНФОРМАЦІЙНОЇ СИСТЕМИ ДЛЯ УПРАВЛІННЯ РЕСТОРАННИМ БІЗНЕСОМ
© Вінтоняк С. М., Кісь Я. П., Чирун Л. Б., 2015
Нові інформаційні технології сприяють оптимізації прийняття рішень у бізнесі. Запропоновано структуру побудови та спосіб практичної реалізації ІС надання послуг у ресторанному бізнесі.
Ключові слова: інформаційні технології, інтелектуальний аналіз даних, прийняття рішень у бізнесі.
New information technologies contribute to optimizing decision-making in business. In the article the structure and the practical implementation of information systems for providing services in the restaurant business are presented.
Key words: information technology, data mining, decision-making in business.
Література – 20
УДК 004.021
А. С. Василюк, Т. М. Басюк
Національний університет “Львівська політехніка”,
кафедра інформаційних систем та мереж
ІНТЕЛЕКТУАЛЬНИЙ АНАЛІЗ ПАРАМЕТРІВ УНІТЕРМІВ
© Василюк А. С., Басюк Т. М., 2015
Описано означення параметрів унітермів. Наведено алґоритм обчислення геометричних параметрів унітермів. Синтезовано математичну модель. Цю модель мінімізовано і побудовано. Досліджено алґоритм обчислення параметрів унітермів.
Ключові слова: унітерм, алгоритм, математична модель, геометричні параметри.
This article deals with the determination of unitherm parameters. An algorithm for calculating the geometrical parameters of unitherms is introduced. A mathematical model is synthesized, then minimized and built. The algorithm for calculating the parameters of unitherms is investigated.
Key words: unitherm, algorithms, mathematical model, geometrical parameters.
Література – 7
УДК 004.652
О. М. Верес
Національний університет “Львівська політехніка”,
кафедра інформаційних систем та мереж
ОНТОЛОГІЯ ОЧИЩЕННЯ ДАНИХ
© Верес О. М., 2015
Описано етапи процесу очищення даних у СППР. Запропоновано та описано концепти онтології очищення даних. Проведено аналіз методів і технологій очищення даних на кожному з етапів процесу з врахуванням його особливостей. Побудована онтологія очищення даних для методологічної систематизації методів у реалізації функціональних елементів моделі СППР.
Ключові слова: дані, метод, онтологія, сховище даних, прийняття рішення, система підтримки прийняття рішень.
This article describes the stages of data cleaning in a DSS. Ontology concepts of data cleaning are proposed and described. Methods and technologies of data cleaning were analyzed at every stage of the process, taking its features into account. An ontology of data cleaning was built for the methodological systematization of methods in implementing the functional elements of the DSS model.
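A small editorial sketch of the kind of cleaning steps such an ontology systematizes (duplicate removal, missing-value handling, outlier filtering); the table is invented:

```python
import numpy as np
import pandas as pd

raw = pd.DataFrame({
    "client": ["A", "A", "B", None, "C"],
    "amount": [100.0, 100.0, np.nan, 50.0, 9999.0],
})
clean = (raw.drop_duplicates()                       # duplicate records
            .dropna(subset=["client"])               # rows missing the key
            .fillna({"amount": raw["amount"].median()}))  # impute missing
clean = clean[clean["amount"] < 1000]                # crude outlier filter
print(clean)
```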
Key words: data, method, ontology, Data Warehouse, decision making, Decision Support System.
Література – 15
УДК 023
Н. В. Веретеннікова, Н. Е. Кунанець, В. В. Пасічник
Національний університет “Львівська політехніка”,
кафедра інформаційних систем та мереж
ІНФОРМАЦІЙНО-БІБЛІОТЕЧНЕ ЗАБЕЗПЕЧЕННЯ ЕЛЕКТРОННОЇ НАУКИ: ДОСВІД АМЕРИКАНСЬКИХ КОЛЕГ
© Веретеннікова Н. В., Кунанець Н. Е., Пасічник В. В., 2015
Розглянуто методологічні засади електронної науки, сформульовано базові концепти феномену електронної науки, описано особливості та переваги інформаційного забезпечення електронної науки за кордоном, також звернено увагу на ті лінгвістичні поняття, які найчастіше зустрічаються при дослідженні особливостей її розвитку.
Ключові слова: інформаційне забезпечення, електронна наука, інформаційно-комунікаційна інфраструктура, життєвий цикл даних, бібліотека, бібліотекар.
In this article the methodological principles of eScience are described, the basic concepts of the eScience phenomenon are formulated, and the features and advantages of eScience information support abroad are outlined. Attention is also drawn to the linguistic concepts that occur most often in studies of its development.
Key words: information support, e-Science, cyber infrastructure, research cycle of data, library, librarian.
Література – 40
УДК 81:004.93
В. А. Висоцька
Національний університет “Львівська політехніка”,
кафедра інформаційних систем та мереж
ОСОБЛИВОСТІ МОДЕЛЮВАННЯ СИНТАКСИСУ РЕЧЕННЯ СЛОВ’ЯНСЬКИХ ТА ГЕРМАНСЬКИХ МОВ ЗА ДОПОМОГОЮ ПОРОДЖУВАЛЬНИХ КОНТЕКСТНО-ВІЛЬНИХ ГРАМАТИК
© Висоцька В. А., 2015
Описано застосування породжувальних граматик у лінгвістичному моделюванні. Опис моделювання синтаксису речення застосовують для автоматизації процесів аналізу та синтезу природно-мовних текстів.
Ключові слова: породжувальні граматики, структурна схема речення, інформаційна лінгвістична система.
This paper presents the application of generative grammars in linguistic modelling. The described modelling of sentence syntax is applied to automate the analysis and synthesis of natural language texts.
Key words: generative grammar, structured scheme sentences, information linguistic system.
Література – 68
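As a minimal illustration of how a generative context-free grammar produces sentences, the following Python sketch generates random sentences from a toy English grammar. The rules are invented for the example and are not the grammars analysed in the paper.
```python
# A toy generative context-free grammar: nonterminals map to lists of
# productions; generation expands symbols recursively.
import random

GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"]],
    "Det": [["the"], ["a"]],
    "N":   [["student"], ["sentence"], ["grammar"]],
    "V":   [["reads"], ["writes"]],
}

def generate(symbol):
    if symbol not in GRAMMAR:                    # terminal: emit as-is
        return [symbol]
    production = random.choice(GRAMMAR[symbol])  # pick one rule, e.g. S -> NP VP
    return [w for part in production for w in generate(part)]

print(" ".join(generate("S")))   # e.g. "the student reads a sentence"
```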
УДК 004.738.5
В. А. Висоцька, А. Ю. Нога, П. Ю. Козлов
Національний університет “Львівська політехніка”,
кафедра інформаційних систем та мереж
УПРАВЛІННЯ WEB-ПРОЕКТАМИ ЕЛЕКТРОННОГО БІЗНЕСУ ДЛЯ РЕАЛІЗАЦІЇ КОМЕРЦІЙНОГО КОНТЕНТУ
© Висоцька В. А., Нога А. Ю., Козлов П. Ю., 2015
Запропоновано модель життєвого циклу контенту в системах електронної комерції. Модель описує процеси опрацювання інформаційних ресурсів у системах електронної контент-комерції та спрощує технологію автоматизації управління контентом. Проаналізовано основні проблеми електронної комерції та функціональних сервісів управління контентом.
Ключові слова: інформаційний ресурс, контент, система управління контентом, життєвий цикл контенту, система електронної контент-комерції.
In this article a content lifecycle model for electronic commerce systems is proposed. The model describes the processing of information resources in electronic content commerce systems and simplifies the technology of automated content management. The main problems of e-commerce and of functional content management services are analysed.
Key words: information resources, content, content management system, content lifecycle, electronic content commerce system.
Література – 25
УДК 004.738.5
В. А. Висоцька1, Л. В. Чирун2
Національний університет “Львівська політехніка”,
1кафедра інформаційних систем та мереж,
2кафедра програмного забезпечення
ФОРМАЛЬНА МОДЕЛЬ ОПРАЦЮВАННЯ ІНФОРМАЦІЙНИХ РЕСУРСІВ В СИСТЕМАХ ЕЛЕКТРОННОЇ КОНТЕНТ-КОМЕРЦІЇ
© Висоцька В. А., Чирун Л. В., 2015
Проаналізовано основні проблеми електронної контент-комерції та функціональних сервісів опрацювання комерційного контенту. Запропонований метод дає можливість створити засоби опрацювання інформаційних ресурсів у системах електронної контент-комерції та реалізувати підсистему управління комерційним контентом.
Ключові слова: Web-ресурс, контент, контент-аналіз, контент-моніторинг, контентний пошук, система електронної контент-комерції.
The main problems of electronic content commerce are analyzed and functional services of commercial content management are explored. The proposed method gives an opportunity to create an instrument of information resources processing in electronic commerce systems. It also enables the implementation of the commercial content management subsystem.
Key words: Web resources, content, content analysis, content monitoring, content search, electronic content commerce systems.
Література – 3
УДК 004.942
М. О. Галущак, Р. А. Бунь
Національний університет “Львівська політехніка”,
кафедра прикладної математики
ПРОСТОРОВЕ МОДЕЛЮВАННЯ ТА АНАЛІЗ ПРОЦЕСІВ ЕМІСІЇ ПАРНИКОВИХ ГАЗІВ ПІД ЧАС ВИДОБУВАННЯ І ПЕРЕРОБЛЕННЯ КАМ’ЯНОГО ВУГІЛЛЯ У ПОЛЬЩІ
© Галущак М. О., Бунь Р. А., 2015
Розроблено математичні моделі для просторового аналізу процесів емісії парникових газів, які виникають при видобуванні і переробці кам’яного вугілля у Польщі. Створено цифрову карту розміщення шахт і коксовень. Удосконалено геоінформаційну технологію, за допомогою якої сформовано георозподілену базу даних і здійснено необхідні обчислення. Отримані оцінки емісій парникових газів представлені за допомогою цифрових карт.
Ключові слова: геоінформаційна технологія, математичне моделювання, емісії парникових газів, просторова інвентаризація, вугільна промисловість.
Mathematical models for the spatial analysis of greenhouse gas emission processes caused by the mining and processing of coal in Poland were elaborated. A digital map of the locations of mines and coking plants was created. The GIS technology was improved, which made it possible to form a geo-distributed database and perform the necessary calculations. The obtained estimates of greenhouse gas emissions are presented on digital thematic maps.
Keywords: geoinformation technology, mathematical modeling, greenhouse gas emission, spatial inventory, coal industry.
Література – 13
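A greatly simplified sketch of the spatial inventory idea: point-source emissions are summed into cells of a regular grid of a digital map. The coordinates, emission values and 0.1-degree resolution below are hypothetical illustration data, not the paper's inventory.
```python
# Greatly simplified spatial inventory: point sources aggregated to grid cells.
from collections import defaultdict

# (longitude, latitude, annual emission in kilotonnes), invented values
sources = [(18.95, 50.26, 12.4), (19.02, 50.30, 8.1), (18.60, 50.10, 3.5)]
CELL = 0.1   # grid resolution in degrees

grid = defaultdict(float)
for lon, lat, e in sources:
    i, j = int(lon / CELL), int(lat / CELL)    # cell indices by truncation
    grid[(i, j)] += e

for (i, j), total in sorted(grid.items()):
    print(f"cell ({i * CELL:.1f}E, {j * CELL:.1f}N): {total:.1f} kt")
```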
УДК 004; 004.02; 004.35; 004.9
В. В. Грицик1, В. В. Грицик2, А. М. Зозуля2
1Національний університет “Львівська політехніка”,
кафедра інформаційних систем та мереж
2Тернопільський національний технічний університет ім. І. Пулюя
БАЗОВІ СИСТЕМНІ СТРУКТУРИ СИНТЕЗУ СИСТОЛІЧНИХ СИСТЕМ ОПРАЦЮВАННЯ ДАНИХ У РЕАЛЬНОМУ ЧАСІ
© Грицик В.В., Грицик В.В., Зозуля А.М., 2015
Досліджено базові системні структури синтезу складних систолічних систем опрацювання даних для розв’язання паралельних алгоритмів в апаратно-орієнтованих методах реалізації системних структур інформаційно-аналітичних систем реального часу.
Ключові слова: вплив паралельних інформаційних технологій.
The basic system structures for the synthesis of complex systolic data processing systems for parallel algorithms are considered in this paper. These parallel algorithms belong to hardware-oriented methods for implementing system structures of real-time information and analytical systems.
Key words: Impact of parallel information technologies.
Література – 13
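For readers unfamiliar with systolic processing, the following Python simulation sketches one classic systolic-style structure: an FIR pipeline in which each cell holds a fixed weight and partial sums shift one cell per clock tick. It is a didactic software model built on our own assumptions, not the hardware structures synthesized in the paper.
```python
# Didactic software model of a systolic-style FIR pipeline.
def systolic_fir(x, w):
    n = len(w)
    wr = w[::-1]                 # cell i holds weight w[n-1-i]
    y_reg = [0] * n              # partial-sum registers of the pipeline
    out = []
    for t in range(len(x) + n - 1):
        sample = x[t] if t < len(x) else 0     # zero-flush at the end
        y_reg = [0] + y_reg[:-1]               # sums shift right each tick
        for i in range(n):
            y_reg[i] += wr[i] * sample         # every cell fires in parallel
        out.append(y_reg[-1])                  # rightmost cell emits output
    return out

print(systolic_fir([1, 2, 3, 4], [1, 0, -1]))  # [1, 2, 2, 2, -3, -4]
```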
УДК 004.032.6
А. Б. Демчук
Національний університет “Львівська політехніка”,
кафедра інформаційних систем та мереж
ВИКОРИСТАННЯ АСОЦІАТИВНИХ ПРАВИЛ ДЛЯ ВИРОБЛЕННЯ ЗНАНЬ З ПОБУДОВИ ТИФЛОКОМЕНТАРІВ
© Демчук А. Б., 2015
Описано розроблення математичного забезпечення процесу тифлокоментування відеоконтенту за асоціативними правилами. Це дало змогу формалізувати побудову відеоконтенту для осіб з вадами зору.
Ключові слова: тифлокоментування, аудіодескрипція, асоціативні правила, відеоконтент, інформаційні технології, відеоконтент для осіб з вадами зору.
The development of mathematical support for the process of typhlocommenting (audio description) of video content by means of association rules is described. This made it possible to formalize the construction of video content for people with visual impairments.
Key words: typhlocomment, audio description, association rules, video content, IT, video content for people with visual impairments.
Література – 4
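A minimal sketch of the association rule idea behind the paper: rules of the form A -> B are extracted from transactions by support and confidence thresholds. The "scene feature" transactions below are invented to mirror the task of deriving typhlocommenting knowledge; they are not data from the paper.
```python
# Toy association-rule mining by support and confidence.
from itertools import combinations

transactions = [
    {"dialogue_pause", "scene_change", "insert_comment"},
    {"dialogue_pause", "insert_comment"},
    {"scene_change", "music_only"},
    {"dialogue_pause", "scene_change", "insert_comment"},
]

def support(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

MIN_SUP, MIN_CONF = 0.5, 0.8
items = sorted({i for t in transactions for i in t})
for a, b in combinations(items, 2):
    sup = support({a, b})
    if sup >= MIN_SUP:
        conf = sup / support({a})              # confidence of rule a -> b
        if conf >= MIN_CONF:
            print(f"{a} -> {b}  (sup={sup:.2f}, conf={conf:.2f})")
```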
УДК 004.942:004.772
І. М. Дронюк, О. Ю. Федевич
Національний університет “Львівська політехніка”,
кафедра автоматизованих систем управління.
АНАЛІЗ ТРАФІКУ КОМП’ЮТЕРНОЇ МЕРЕЖІ НА ОСНОВІ ЕКСПЕРИМЕНТАЛЬНИХ ДАНИХ СЕРЕДОВИЩА WIRESHARK
© Дронюк І. М., Федевич О. Ю., 2015
Проаналізовано трафік комп’ютерних мереж, отриманий за допомогою аналізатора мережевих протоколів Wireshark. Спостереження проводилось за такими показниками: сумарна кількість пакетів, середня кількість пакетів, середній розмір пакета та середня швидкість передавання пакетів. Отримані дані використовуються для перевірки теоретичних моделей.
Ключові слова: трафік, комп’ютерна мережа, аналізатор мережевих протоколів, швидкість передачі даних.
This article analyses computer network traffic obtained via the Wireshark network protocol analyzer. Observations were conducted on the following parameters: the total number of packets, the average number of packets, the average packet size, and the average packet transmission rate. The obtained data are used to test theoretical models.
Key words: traffic, computer network, network protocol analyzer, bit rate.
Література – 10
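The metrics named in the abstract can be computed directly from packet (timestamp, length) pairs, to which a Wireshark capture export can be reduced. A small sketch with invented packet data:
```python
# Traffic metrics from (timestamp in s, length in bytes) pairs; invented data.
packets = [(0.00, 60), (0.12, 1514), (0.25, 1514), (0.31, 40), (0.90, 590)]

total = len(packets)
total_bytes = sum(size for _, size in packets)
duration = packets[-1][0] - packets[0][0] or 1.0   # guard against zero span

print(f"total packets:       {total}")
print(f"average packet size: {total_bytes / total:.1f} B")
print(f"packet rate:         {total / duration:.2f} pkt/s")
print(f"throughput:          {8 * total_bytes / duration / 1000:.1f} kbit/s")
```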
УДК 004.738.5
Я. П. Кісь, В. А. Висоцька, Л. Б. Чирун, В. М. Фольтович
Національний університет “Львівська політехніка”,
кафедра інформаційних систем та мереж.
ЗАСТОСУВАННЯ КОНТЕНТ-АНАЛІЗУ ДЛЯ ОПРАЦЮВАННЯ ТЕКСТОВИХ МАСИВІВ ДАНИХ
© Кісь Я. П., Висоцька В. А., Чирун Л. Б., Фольтович В. М., 2015
Запропоновано методи аналізу контенту для інтернет-газети. Модель описує процеси опрацювання інформаційних ресурсів у системах аналізу контенту та спрощує технологію автоматизації управління контентом. Проаналізовано основні проблеми синтаксичного та семантичного аналізу контенту та функціональних сервісів управління контентом.
Ключові слова: контент, аналіз контенту, інформаційний ресурс, система управління контентом.
This article presents content analysis techniques for an online newspaper. The model describes the processing of information resources in content analysis systems and simplifies the technology of automated content management. The basic problems of syntactic and semantic content analysis and of functional content management services are analysed.
Key words: content, analysis of content, information resource, content management system.
Література – 10
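A minimal illustration of frequency-based content analysis over article texts; the sample articles and stop-word list are invented for the example.
```python
# Minimal frequency-based content analysis of article texts.
import re
from collections import Counter

articles = [
    "The city council approved the new budget on Tuesday.",
    "Budget talks in the city stalled over school funding.",
]
STOP = {"the", "on", "in", "over", "new"}

tokens = [w for text in articles
            for w in re.findall(r"[a-z']+", text.lower()) if w not in STOP]
for word, count in Counter(tokens).most_common(5):
    print(word, count)
```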
УДК 004.9
А. В. Катренко, О. В. Пастернак
Національний університет “Львівська політехніка”,
кафедра інформаційних систем та мереж
МАТЕМАТИЧНІ МОДЕЛІ ІНВЕСТУВАННЯ В ГАЛУЗІ ІНФОРМАЦІЙНИХ ТЕХНОЛОГІЙ
© Катренко А. В., Пастернак О. В., 2015
Розглянуто основні фактори, що впливають на ефективність інвестицій в ІТ, досліджені їх структурні складові та вплив на ефективність інвестування. Запропоновано багатокритерійну модель розподілення інвестицій в ІТ за періодами та обрання початкового моменту інвестування, проаналізовано можливі методи отримання рішень на ній та розроблено рекомендації щодо застосування цих методів.
Ключові слова: інвестування, інформаційні технології, критерій якості, математична модель, методи отримання рішень.
This article examines the basic factors that affect the efficiency of IT investments; their structural components and influence on investment efficiency are investigated. A multi-objective model for allocating IT investments across periods and selecting the initial moment of investment is proposed; possible solution methods are analysed and recommendations on their application are elaborated.
Key words: investment, information technologies, quality criterion, mathematical model, decision-making methods.
Література – 18
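One simplified facet of such a model, choosing the starting period of an investment by a net-present-value criterion, can be sketched as follows. The cash flows and discount rate are hypothetical, and the paper's multi-criteria model is considerably richer than this single criterion.
```python
# Choosing the investment start period by NPV; all figures are invented.
RATE = 0.10                        # discount rate per period (assumed)
CASH_FLOWS = [-100, 30, 45, 60]    # project cash flows from its start period

def npv(start):
    return sum(cf / (1 + RATE) ** (start + t)
               for t, cf in enumerate(CASH_FLOWS))

for s in range(4):
    print(f"start at period {s}: NPV = {npv(s):6.2f}")
print("best starting period:", max(range(4), key=npv))
```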
УДК 004.451.45; 004.451.8
М. І. Ковалик, Р. М. Камінський
Національний університет “Львівська політехніка”,
кафедра інформаційних систем та мереж
ОСОБЛИВОСТІ ВЗАЄМОДІЇ КОМПОНЕНТІВ У МОБІЛЬНІЙ ПЛАТФОРМІ ANDROID
© Ковалик М. І., Камінський Р. М., 2015
Описано головні способи взаємодії між сервісами та активностями у системі Android. Описано переваги і недоліки кожного з підходів, а також ситуації, коли конкретний підхід найоптимальніший.
Ключові слова: активність, інтент, багатоканальний приймач, міжпроцесна взаємодія.
The main ways of interaction between Services and Activities in Android are described in the article. Advantages and disadvantages of each approach are described. The situations where a particular approach is most appropriate are dealt with.
Key words: Service, Activity, Intent, BroadcastReceiver, Interprocess Communication.
Література – 8
УДК 004.852; 004.94
П. О. Кравець
Національний університет “Львівська політехніка”,
кафедра інформаційних систем та мереж
МАТРИЧНА CТОХАCТИЧНА ГРА З Q-НАВЧАННЯМ
© Кравець П. О., 2015
Розроблена модель матричної стохастичної гри для прийняття рішень в умовах невизначеності. Запропоновано метод Q-навчання для розв’язування стохастичної гри з апріорі невідомими матрицями виграшів. Виконано формулювання ігрової задачі, описано марківський рекурентний метод та алгоритм для її розв’язування. Отримано та проаналізовано результати комп’ютерного моделювання стохастичної гри з Q-навчанням.
Ключові слова: стохастична гра, умови невизначеності, Q-навчання, марківський рекурентний метод.
A model of a matrix stochastic game for decision-making under uncertainty is developed. A Q-learning method for solving stochastic games with a priori unknown payoff matrices is offered. The game problem is formulated, and the Markovian recurrent method and algorithm for solving it are described. The results of computer modelling of the stochastic game with Q-learning are obtained and analysed.
Key words: stochastic game, uncertainty conditions, Q-learning, Markovian recurrent method.
Література – 13
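A compact sketch of the core idea: stateless Q-learning in a matrix game whose payoff matrix is unknown to the player and observed only through sampled rewards. The payoff matrix, learning rate and epsilon-greedy exploration below are illustrative assumptions, not the paper's exact scheme.
```python
# Stateless Q-learning in a matrix game with a hidden payoff matrix.
import random

PAYOFF = [[2.0, 0.0], [3.0, 1.0]]   # hidden from the learner; row 1 dominates
q = [0.0, 0.0]                      # one Q-value per own pure strategy
ALPHA, EPS = 0.05, 0.1

for step in range(20000):
    a = random.randrange(2) if random.random() < EPS else q.index(max(q))
    b = random.randrange(2)                  # opponent mixes uniformly
    reward = PAYOFF[a][b]                    # the only feedback observed
    q[a] += ALPHA * (reward - q[a])          # recurrent Q-update

print("learned Q-values:", [round(v, 2) for v in q])   # approx [1.0, 2.0]
print("best response:", q.index(max(q)))                # strategy 1
```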
УДК 811.161.2’33:519.25
І. М. Кульчицький
Національний університет “Львівська політехніка”,
кафедра прикладної лінгвістики
ВИБІР РОЗМІРУ ВИБІРКИ ДЛЯ СТАТИСТИЧНИХ ОПРАЦЮВАНЬ ТЕКСТІВ
© Кульчицький І. М., 2015
Працю присвячено одному із важливих напрямків квантитативних досліджень мови та мовлення – вивченню інформаційно-статистичних властивостей тексту. Здійснено спробу встановити для творів Марка Черемшини відсоток авторських текстів, який достатній для аналізу вірогідних відносних частот символів у його творах та дослідити стійкість цих частот. Зроблено низку висновків про розмір уривків тексту, з яких формується текст-вибірка для статистичних обстежень.
Ключові слова: квантитативні дослідження, вибірка, обсяг вибірки, частота, Марко Черемшина.
The article is dedicated to one of the important areas of quantitative studies of language and speech: the study of the information and statistical properties of text. An attempt was made to establish, for the literary works of Marko Cheremshyna, the percentage of the author's texts sufficient for analysing reliable relative character frequencies, and to investigate the stability of these frequencies. A number of conclusions were drawn about the size of the text passages from which a text sample for statistical surveys may be formed.
Key words: quantitative study, sample, sample size, frequency, Marko Cheremshyna.
Література – 17
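The stability check described above can be sketched as follows: relative character frequencies are computed on growing fractions of a text and compared. The placeholder string below stands in for the actual corpus of Cheremshyna's works.
```python
# Stability of relative character frequencies as the sample grows.
from collections import Counter

text = ("placeholder corpus text; any sufficiently long string shows how "
        "relative character frequencies settle as the sample grows. ") * 200

def rel_freq(sample, char):
    counts = Counter(sample)
    return counts[char] / sum(counts.values())

for frac in (0.1, 0.25, 0.5, 1.0):
    sample = text[: int(len(text) * frac)]
    print(f"{int(frac * 100):3d}% sample: f('e') = {rel_freq(sample, 'e'):.4f}")
```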
УДК 811.161.2’33:519.25
І. М. Кульчицький, У. С. Шандрук
Національний університет “Львівська політехніка”,
кафедра прикладної лінгвістики
ВПЛИВ ОРФОГРАФІЇ НА ЧАСТОТНІСТЬ БУКВ У ТЕКСТАХ
© Кульчицький І. М., Шандрук У. С., 2015
Розглянуто один із важливих напрямків квантитативних досліджень мови та мовлення – вивчення інформаційно-статистичних властивостей тексту. Здійснено спробу перевірки на творах Леся Мартовича впливу орфографії на відносну частотність букв у текстах. Зроблено відповідні висновки.
Ключові слова: квантитативні дослідження, частота букв, орфографія, відносна частота, Лесь Мартович.
The article deals with one of the important areas of quantitative studies of language and speech: the study of the information and statistical properties of text. On the basis of works by Les Martovych, an attempt was made to verify the impact of spelling on the relative frequency of letters in texts. A number of relevant conclusions were drawn.
Key words: quantitative study, frequency of letters, spelling, relative frequency, Les Martovych.
Література – 35
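A toy illustration of the effect being tested: the same words rendered in an older and a modernized orthography yield different relative letter frequencies. The "old spelling" forms are hypothetical examples, not quotations from Martovych's texts.
```python
# Comparing letter frequencies of the same words in two orthographies.
from collections import Counter

original   = "втѣкали вѣтер мѣсто"    # hypothetical older orthography
modernized = "втікали вітер місто"    # the same words, modern spelling

def freqs(text):
    letters = [c for c in text.lower() if c.isalpha()]
    return {c: k / len(letters) for c, k in Counter(letters).items()}

f_old, f_new = freqs(original), freqs(modernized)
for ch in sorted(set(f_old) | set(f_new)):
    print(f"{ch}: old={f_old.get(ch, 0):.3f}  new={f_new.get(ch, 0):.3f}")
```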
УДК 004.652
О. І. Кушнірецька1, І. І. Кушнірецька1, А. Ю. Берко2
Національний університет “Львівська політехніка”,
1кафедра інформаційних систем та мереж,
2кафедра загальної екології та екоінформаційних систем
СЕМАНТИЧНИЙ ПОШУК І ЗБЕРІГАННЯ ДАНИХ НАУКОВО-ТЕХНІЧНОЇ ІНФОРМАЦІЙНОЇ СИСТЕМИ
© Кушнірецька О. І., Кушнірецька І. І., Берко А. Ю., 2015
Описано семантичний пошук і зберігання даних науково-технічної інформаційної системи. Наведено пропозиції щодо семантичного структурування контенту науково-технічної інформаційної системи з явним структурованим представленням семантичних зв'язків між інформаційними об'єктами, що містяться в системі. Визначено основні складові математичної моделі онтології науково-технічної інформаційної системи для семантичного пошуку і зберігання науково-технічного інформаційного ресурсу.
Ключові слова: науково-технічна інформаційна система, семантичний пошук, науково-технічний інформаційний ресурс, зберігання науково-технічних інформаційних ресурсів, онтології.
This paper describes the semantic search and storage of data in a scientific and technical information system. Proposals for the semantic structuring of the content of a scientific and technical information system, with an explicitly structured representation of semantic relations between the information objects contained in the system, are presented. The main components of the mathematical model of the ontology of a scientific and technical information system for the semantic search and storage of scientific and technical information resources are determined.
Key words: scientific and technical information system, semantic search, scientific and technical information resources, storage of scientific and technical information resources, ontology.
Література – 11
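The explicitly structured semantic relations mentioned in the abstract can be modelled as subject-predicate-object triples; the toy sketch below performs relation-aware search over such triples. All identifiers are invented for illustration.
```python
# Toy relation-aware search over subject-predicate-object triples.
TRIPLES = [
    ("paper_12", "hasTopic", "semantic_search"),
    ("paper_12", "citedBy", "paper_30"),
    ("paper_30", "hasTopic", "data_storage"),
    ("semantic_search", "subTopicOf", "information_retrieval"),
]

def related(obj, depth=2):
    """Collect objects reachable through semantic links within `depth` hops."""
    found, frontier = set(), {obj}
    for _ in range(depth):
        frontier = ({o for s, _, o in TRIPLES if s in frontier} |
                    {s for s, _, o in TRIPLES if o in frontier})
        found |= frontier
    return found - {obj}

print(sorted(related("paper_12")))
```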
УДК 004.89
В. В. Литвин, М. Я. Гопяк
Національний університет “Львівська політехніка”,
кафедра інформаційних систем та мереж
АПРОКСИМАЦІЯ ДОСТОВІРНОСТІ ІНФОРМАЦІЙНИХ ОБ’ЄКТІВ ОНТОЛОГІЇ ПРЕДМЕТНОЇ ОБЛАСТІ НА ОСНОВІ ПОЛІНОМІАЛЬНИХ СПЛАЙНІВ
© Литвин В. В., Гопяк М. Я., 2015
Запропоновано метод апроксимації коефіцієнта достовірності інформаційних об’єктів онтологій предметної області на основі поліноміальних сплайнів. Розроблений метод дає змогу видаляти зайві об’єкти онтології, межа достовірності яких нижча від певного наперед заданого порогу.
Ключові слова: онтологія, апроксимація, достовірність, сплайн, база знань.
A method for approximating the reliability coefficient of information objects of domain ontologies, based on polynomial splines, is proposed in the article. The developed method makes it possible to remove unnecessary ontology objects whose reliability falls below a certain pre-specified threshold.
Key words: ontology, approximation, reliability, spline, knowledge base.
Література – 20
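A minimal sketch of the approach as we read it: reliability values are approximated with a smoothing polynomial spline, and objects whose approximated reliability falls below a threshold are discarded. The data, spline parameters and the 0.55 threshold are illustrative assumptions (requires NumPy and SciPy).
```python
# Spline approximation of reliability scores with threshold filtering.
import numpy as np
from scipy.interpolate import UnivariateSpline

t = np.linspace(0, 10, 30)                       # hypothetical object axis
noise = 0.03 * np.random.default_rng(1).normal(size=30)
reliability = 0.9 - 0.05 * t + noise             # noisy reliability scores

spline = UnivariateSpline(t, reliability, k=3, s=0.02)  # cubic smoothing spline
THRESHOLD = 0.55

keep = spline(t) >= THRESHOLD                    # filter by approximated value
print(f"objects kept: {keep.sum()} of {len(t)}")
```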