

International Review on Computers and Software - January 2008 - Papers







Security Enhancement of Route Optimization in Mobile IPv6 Networks
        by A. Mehdizadeh, S. Khatun, B. M. Ali, R. S. A. Raja Abdullah, G. Kurup


Abstract - Mobile IPv6 (MIPv6) allows a Mobile Node (MN) to remain addressable by its home address. Route Optimization (RO) is the standard MIPv6 mechanism for routing packets between the MN and a Correspondent Node (CN) over the shortest possible path, providing better bandwidth use and faster transmission. However, RO greatly increases the security risk, which is one of the main reasons IPv6 has not yet been widely deployed; IPSec protects only the signaling between the MN and the Home Agent. This paper focuses on an enhanced security scheme for RO, evaluated through a test-bed experiment. An enhanced security algorithm is developed on top of MIPv6 RO to secure data and provide safe communication between the MN and CN. The algorithm detects and prevents attackers from modifying data by means of an encryption algorithm, at the cost of a small but tolerable increase in delay. A real-time network test-bed is implemented to demonstrate the efficiency of the proposed method. The experimental results show that the proposed scheme significantly improves the data security of RO while maintaining the other aspects of network performance.
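The abstract does not spell out the authors' algorithm; as a hedged illustration of the general idea it describes (detecting and rejecting packets modified in transit), here is a minimal sketch using a keyed MAC from Python's standard library. The names `protect` and `verify` are hypothetical.

```python
import hmac, hashlib, os

def protect(payload: bytes, key: bytes) -> bytes:
    # Append a keyed MAC so the receiver can detect any modification in transit.
    tag = hmac.new(key, payload, hashlib.sha256).digest()
    return payload + tag

def verify(message: bytes, key: bytes) -> bytes:
    # Split off the MAC, recompute it, and reject tampered packets.
    payload, tag = message[:-32], message[-32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("packet modified in transit")
    return payload

key = os.urandom(32)
msg = protect(b"binding update: MN -> CN", key)
assert verify(msg, key) == b"binding update: MN -> CN"

# A single flipped bit in the payload is detected:
tampered = bytes([msg[0] ^ 1]) + msg[1:]
try:
    verify(tampered, key)
except ValueError:
    print("tamper detected")
```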

Copyright 2008 Praise Worthy Prize S.r.l. - All rights reserved


Keywords: IPv6 Test-bed, Mobile IPv6, Route Optimization, Security.



Dependency Analysis of Risks in Information Security

        by S. Kondakci


Abstract - This paper presents an abstract concept of security planning processes using a simple model to express conditional risk factors. This analytical work emphasizes the relationships across the major security planning phases. It discusses the chain of logical events that describes dependence in risk propagation and proves a theorem of causal risk propagation through the subsequent planning phases. The work can provide useful guidance for efficient security planning and risk management applicable to various engineering fields. Because of its generic nature, it can also be applied to multi-disciplinary dependency analyses, quality control, and the development of risk assessment tools and techniques. The risk analysis method can also provide a theoretical basis for information security education. Theoretical risk assessment in information security has not been thoroughly undertaken; security research should provide approaches that are theoretically sound as well as practical and realistic.
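The chain of conditionally dependent phases can be sketched as a product of conditional risk factors. This is an illustrative sketch of the general idea, not the paper's model; the function name and probability values are invented.

```python
def propagate_risk(p_initial, conditionals):
    """Propagate risk through planning phases as a chain of conditional events.
    conditionals[i] = P(failure in phase i+1 | failure propagated to phase i)."""
    risk = p_initial
    trace = [risk]
    for p_cond in conditionals:
        risk *= p_cond   # causal dependence: each phase inherits upstream risk
        trace.append(risk)
    return trace

# Risk entering the pipeline, then three dependent planning phases:
trace = propagate_risk(0.5, [0.8, 0.5, 0.25])
print(trace)  # risk decays multiplicatively along the chain
```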


Keywords: Foundations of computer security, information security education, quantitative risk assessment, probabilistic risk propagation.



Self Organization and Emergence: Overview and Examples
        by  A. Lemouari, M. Benmohamed


Abstract - Biological evolution has generated a rich variety of successful solutions, and optimized strategies can be inspired by nature. One interesting example is ant colonies, which exhibit collective intelligence even though the dynamics of each individual remain simple. The emergence of different patterns depends on the pheromone trail laid by the foragers, which serves as a positive-feedback mechanism for sharing information. In this paper, we first present an overview of self-organization and emergence and give the principal characteristics of both concepts. Secondly, two examples are used: the first illustrates the view that self-organization is a cause, where an emergent property can be the result of a self-organization process; the other illustrates the principle that self-organization is an effect, in which case the emergence is itself self-organized.
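The pheromone positive-feedback loop described above can be sketched with a toy double-bridge simulation: ants choose a branch in proportion to its trail, and the shorter branch is reinforced faster. This is an illustrative model only, not the authors' system; the branch names, deposit rule, and evaporation rate are invented.

```python
import random

def double_bridge(steps=2000, seed=1):
    random.seed(seed)
    tau = {"short": 1.0, "long": 1.0}   # initial pheromone on each branch
    length = {"short": 1, "long": 2}
    for _ in range(steps):
        total = tau["short"] + tau["long"]
        branch = "short" if random.random() < tau["short"] / total else "long"
        # Positive feedback: shorter trips deposit more trail per unit length,
        # so the short branch accumulates pheromone faster.
        tau[branch] += 1.0 / length[branch]
        for b in tau:                    # evaporation keeps the system adaptive
            tau[b] *= 0.999
    return tau

tau = double_bridge()
assert tau["short"] > tau["long"]  # the colony converges on the short branch
```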



Keywords: Self organization, Emergence, Exclusion process, Ants system, Pheromone.



Integration of Dependability Features in a Synchronous Application
        by F. Boulanger


Abstract - We present an overview of a new approach, with associated tools, for implementing dependability strategies in applications that follow the reactive synchronous approach. Starting from the description of an application as a graph of interconnected components, we model dependability policies as transformations of this graph. The transformed graph describes a new version of the application that integrates dependability features such as multiple copies of some components, voters that compare the outputs of the copies of a component, or behavior checkers that compare the behavior of a component to an expected behavior. The graph transformations rely explicitly on the assumption that the components obey a synchronous execution model.
The design of the dependability policies themselves is not addressed: our goal is only to provide dependability experts with a language for describing such policies and integrating them into an application. The integration is done off-line and generates a new application whose structure will not change at runtime. Runtime changes in the structure of an application are nevertheless possible and are discussed at the end of this article.
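One of the dependability features mentioned, a voter over replicated copies of a component, can be sketched as follows. This is an illustrative Python sketch, not the authors' synchronous implementation; `voter` and `replicate` are hypothetical names.

```python
from collections import Counter

def voter(outputs):
    """Majority vote over replicated component outputs (classic TMR).
    Masks a single faulty copy; raises if no majority exists."""
    value, count = Counter(outputs).most_common(1)[0]
    if count <= len(outputs) // 2:
        raise RuntimeError("no majority: too many divergent replicas")
    return value

def replicate(component, n=3):
    """Graph transformation in miniature: the application sees one node,
    while n copies run behind a voter."""
    def wrapped(*args):
        return voter([component(*args) for _ in range(n)])
    return wrapped

# A component whose first replica misbehaves on this tick is outvoted:
calls = iter([7, 3, 3])
def faulty(): return next(calls)
assert voter([faulty(), faulty(), faulty()]) == 3
```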



Keywords: Architecture description language, Dependability, Model transformation.



On the Use of Meta-Modelling and Graph Grammars to Generate Petri Nets Models for Business Processes

        by R. El Mansouri


Abstract - In business process modeling there are several potential control-flow problems that, if not detected prior to workflow deployment, may lead to control-flow anomalies and behavioral inconsistencies such as deadlock, livelock, imperfect termination, and multiple task repetitions. Petri nets provide a powerful formal modeling method: they rest on a solid mathematical foundation, offer a graphical representation of system models as net diagrams, and support various analysis techniques such as the reachability tree, the incidence matrix, and invariant analysis, through which properties of a Petri net model such as liveness, reachability, and deadlock can be analyzed. Since both business processes and Petri net models are graphs, Meta-Modeling and graph grammars seem a natural way to transform business processes into their equivalent Petri net models. In this paper we propose an automatic approach based on Meta-Modeling and graph grammars to generate Petri net models for business processes using AToM3. The approach is illustrated through an example.
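The Petri net firing rule and reachability analysis mentioned above can be sketched minimally. This is an illustrative toy, not the AToM3-generated models; the place and transition names are invented.

```python
def enabled(marking, pre):
    # A transition is enabled when every input place holds enough tokens.
    return all(marking[p] >= n for p, n in pre.items())

def fire(marking, pre, post):
    # Consume tokens from input places, produce tokens in output places.
    m = dict(marking)
    for p, n in pre.items():  m[p] -= n
    for p, n in post.items(): m[p] = m.get(p, 0) + n
    return m

# A task acquiring and releasing a shared resource:
transitions = {
    "start": ({"idle": 1, "res": 1}, {"busy": 1}),
    "end":   ({"busy": 1}, {"idle": 1, "res": 1}),
}
m0 = {"idle": 1, "busy": 0, "res": 1}

def reachable(m0, transitions):
    """Exhaustive reachability search over markings (finite here)."""
    seen, frontier = set(), [m0]
    while frontier:
        m = frontier.pop()
        key = tuple(sorted(m.items()))
        if key in seen:
            continue
        seen.add(key)
        for pre, post in transitions.values():
            if enabled(m, pre):
                frontier.append(fire(m, pre, post))
    return seen

assert len(reachable(m0, transitions)) == 2  # resource free / resource taken
```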



Keywords: AToM3, Business Process Modeling, Petri nets, Meta-Modeling, Graph grammars.



Management Software Strategy with Mobile Agent

        by M. Bernichi, F. Mourlin


Abstract - Mobile agents can physically travel across a network and perform tasks on machines that provide agent-hosting capability. This allows processes, on the one hand, to migrate from computer to computer and, on the other, to split into multiple instances that execute on different machines before returning to their point of origin. This capability is essential for scheduling software and applying an asset management strategy. A mobile agent is able to check in and check out resources, inventory assets by parsing log files, automatically discover anomalies, track usage and depreciation, and so on. Our framework is based on a mobile agency using mobile agents. Each agent has its own roadmap and a description of its activity, and thus collaborates in a global observation of the network. Our approach is illustrated with precise results for a web application.



Keywords: Asset Management, Mobile Agent, Community.



Blind Watermarking Integrated to Wavelet Image Coding Scheme

        by A. Ouled Zaid, A. Makhloufi, A. Bouallegue, C. Olivier, A. Nait-Ali


Abstract - With the blossoming demand for efficiently storing and transmitting digital information, image compression has become increasingly vital. At low bitrates, however, lossy compression constitutes a strong attack on the watermarking process. In this work, we combine security solutions with wavelet-based image compression by integrating copyright protection directly into the coding/decoding schemes. Taking advantage of the wavelet-based coding architecture, quantization-based watermark embedding is applied to the wavelet coefficients. Experimental results show that the adopted watermarking strategy exhibits high robustness to compression attacks with respect to the reconstruction quality. Moreover, it provides a significant capacity improvement compared to other hybrid watermarking-coding methods in the literature.
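Quantization-based embedding of the kind described can be sketched with generic quantization index modulation (QIM): each coefficient is rounded to one of two interleaved lattices selected by the watermark bit. This is a hedged sketch of the general technique, not necessarily the authors' exact scheme; the step size `delta` and the coefficient values are assumptions.

```python
def qim_embed(coeff, bit, delta=8.0):
    """Snap the coefficient to the lattice selected by the watermark bit."""
    offset = delta / 2 if bit else 0.0
    return round((coeff - offset) / delta) * delta + offset

def qim_extract(coeff, delta=8.0):
    # Decode by checking which lattice the coefficient lies closer to.
    d1 = abs(coeff - qim_embed(coeff, 1, delta))
    d0 = abs(coeff - qim_embed(coeff, 0, delta))
    return 1 if d1 < d0 else 0

coeffs = [13.2, -7.9, 41.5, 3.3]      # stand-ins for wavelet coefficients
bits = [1, 0, 1, 1]
marked = [qim_embed(c, b) for c, b in zip(coeffs, bits)]
assert [qim_extract(c) for c in marked] == bits

# Blind extraction survives perturbations smaller than delta/4:
noisy = [c + 1.5 for c in marked]
assert [qim_extract(c) for c in noisy] == bits
```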



Keywords: Blind watermarking, wavelet transform, lossy image compression.



Adaptive Image Fusion Scheme Based on Contourlet Transform and Machine Learning

        by M. H. Malik, S. A. M. Gilani, Anwaar-ul-Haq


Abstract - An adaptive image fusion scheme is proposed based on the combination of the contourlet transform, Kernel Principal Component Analysis (K-PCA), Support Vector Machines (SVM), and Mutual Information (MI). The contourlet transform is well suited to image fusion because of its properties of localization, multiresolution, directionality, and anisotropy. K-PCA operates on the low-frequency subband to extract features, and an SVM is applied to the high-frequency subbands to obtain a composite image with extended information. Moreover, MI is used to adjust the contribution of each source image to the final fused image. Performance evaluation is carried out using a recently developed metric, the Image Quality Index (IQI). The experimental results show that the proposed scheme outperforms previous approaches both subjectively and quantitatively.



Keywords: Contourlet Transform, Image Fusion, Kernel Principal Component Analysis (K-PCA), Mutual Information (MI), Support Vector Machine (SVM).



Evaluating a Fuzzy Chip by Hardware-In-the-Loop (HIL) Simulation
        by E. Duman, H. Can, E. Akın


Abstract - In this study, an example of hardware-in-the-loop (HIL) simulation is proposed for use in developing control applications that contain a fuzzy logic controller chip. A microcontroller with built-in fuzzy logic and a fuzzy inference mechanism allows a remarkable speed-up in comparison with conventional controllers based only on Boolean logic. To guarantee the choice of optimum parameters, such a fuzzy controller system should be tested either in the real world or in a lower-cost, easier way: a real fuzzy microcontroller can be tested against a simulation environment instead of a real system. A simulation of a DC motor, developed in an object-oriented language, and a real design of a fuzzy logic controller system are presented. In the experiments, the fuzzy logic controller system reacted with a simulated drive running on a PC, with the I/O procedures implemented using a data-acquisition board. Reasonable results for the step-response behavior of the DC motor angular-velocity control system are shown for two different reference values.
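The HIL pattern, a software plant model closed in a loop with the controller under test, can be sketched as follows. This is an illustrative stand-in only: a crude three-rule proportional mapping replaces the fuzzy chip, and a first-order motor model replaces the object-oriented simulation; all parameters are invented. Note the steady-state droop such a simple controller leaves, which is exactly the kind of behavior an HIL run exposes cheaply.

```python
def motor_step(omega, voltage, dt=0.01, k=2.0, tau=0.5):
    """First-order DC motor model: d(omega)/dt = (k*voltage - omega) / tau."""
    return omega + dt * (k * voltage - omega) / tau

def controller(error):
    """Stand-in for the fuzzy chip: a coarse three-rule mapping from
    speed error to drive voltage (the real device evaluates fuzzy rules)."""
    if error > 5:
        return 10.0
    if error > 0:
        return error
    return 0.0

def hil_loop(reference, steps=600):
    # The PC-side simulation plays the plant; the controller plays the chip.
    omega = 0.0
    for _ in range(steps):
        omega = motor_step(omega, controller(reference - omega))
    return omega

# With reference 12.0 this proportional rule settles at omega = 8.0,
# showing the steady-state error a better (fuzzy) rule base would remove.
assert abs(hil_loop(12.0) - 8.0) < 0.01
```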



Keywords: Fuzzy Controller Design, HIL Simulation, Fuzzy Chip.



GraphConnect: Framework of Discovering Closed Highly Connected Pattern from Semistructured Dataset
        by F. L. Gaol, B. Widjaja


Abstract - Semistructured data appears when the source does not impose a rigid structure on the data, as on the web, or when data is combined from several heterogeneous sources. In mathematical terms, a semistructured data set can be regarded as a graph data set. One particularly interesting problem in mining semistructured patterns is finding frequent, highly connected subgraphs in large relational graphs; the task is to find not only frequent graphs but graphs that also satisfy a connectivity constraint. We identify three major characteristics that distinguish this from previous frequent-graph mining problems. First, in relational graphs each node represents a distinct object, so no two nodes share the same label; in biological networks, nodes often represent unique objects such as genes and enzymes. Second, relational graphs may be very large. Third, the interesting patterns should not only be frequent but should also satisfy the connectivity constraint. To handle these new challenges, two issues have to be solved: (1) how to mine frequent graphs efficiently in large relational graphs, and (2) how to handle the connectivity constraint. Since frequent-graph mining usually generates too many patterns, it is more appealing to mine closed frequent graphs only. Our major contribution is to tackle the connectivity constraint: we use the minimum-cut criterion to measure the connectivity of a pattern and examine the issues of integrating the connectivity constraint into the closed-graph mining process.
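The minimum-cut criterion for a candidate pattern can be sketched as follows. This is illustrative only: a real miner would use an efficient min-cut algorithm rather than enumerating bipartitions, which is feasible only for the small patterns shown here.

```python
from itertools import combinations

def min_cut(vertices, edges):
    """Exact global min cut by enumerating bipartitions (fine for small
    candidate patterns, hopeless for the whole relational graph)."""
    best = len(edges)
    vs = list(vertices)
    for r in range(1, len(vs)):
        for side in combinations(vs, r):
            s = set(side)
            cut = sum(1 for u, v in edges if (u in s) != (v in s))
            best = min(best, cut)
    return best

def highly_connected(vertices, edges, k):
    # A pattern satisfies the connectivity constraint iff every cut has >= k edges.
    return min_cut(vertices, edges) >= k

triangle = [("a", "b"), ("b", "c"), ("a", "c")]
path = [("a", "b"), ("b", "c")]
assert highly_connected("abc", triangle, 2)      # min cut = 2
assert not highly_connected("abc", path, 2)      # min cut = 1
```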



Keywords: Semistructured data, closed pattern, closed frequent graphs, connectivity.



Multi-Channel Framework for Web Application Servers: Expanding the Boundaries of Parallel Computing to Web Programming
        by K. Sobh, A. Sameh


Abstract - Web application servers are software engines that deliver applications to client computers and/or devices. They are distinguished from web servers by the extensive use of server-side dynamic content and frequent integration with database engines. In order to bring true performance (speedup and throughput) to web application servers, we propose expanding the boundaries of parallel computing into the arena of web application development and programming. In this paper we bring into the web world ideas such as functional decomposition and data-level parallelism from MPI; explicit concurrency directives from OpenMP; divide-and-conquer, master-slave, and pipeline processing from parallel programming paradigms; caching and buffering from parallel I/O; fault tolerance from PVM; dynamic load balancing through calibrators from cluster and grid computing; and the management of heterogeneity, also from cluster and grid computing. Recently, clustering web application servers has become a reality as demand for performance has increased and back-end applications have become more complex. Over the past few years efforts have been made on two fronts: clustering the web front-end based on web transaction dispatching, such as that provided by highly available load-balancing Apache clusters, and distributing back-end execution through service-based execution using distributed execution mechanisms such as EJBs (Enterprise Java Beans) and Grid Services. Our proposed framework is much more general than these limited efforts: it allows bringing many well-tested parallel computing ideas into the web application world while still conforming to current web standards. Parallel computing operates on the principle that a large application can almost always be divided into smaller ones that may be carried out concurrently, exploiting parallelism at the task, data, and thread levels.
It has been used for many years, mainly in high-performance computing, but interest has grown in recent years due to the physical constraints preventing frequency scaling. Parallel computing has recently become the dominant paradigm in computer architecture, mainly in the form of multicore and multithreaded processors. We argue that parallel computing will enjoy similar popularity when it extends its boundaries to the web world; EJBs testify to this, but only within the Java environment.



Keywords: UDP, TCP, Container, HPA, Cluster, Web Programming.



Semantic Web vs. Traditional Web
        by M. Mahmoudi, M. Farhoodi, A. M. Z. Bidoki, M. Azadnia


Abstract - Due to the development of technology in recent decades, we have encountered an enormous amount of data, and there is consequently a vast demand for new document management technologies. To tackle this problem, the "Semantic Web" has been proposed as an extension of the traditional web that enables machines to intelligently process web data, improving the understandability and interpretability of data for machine processing.
The main aim of this paper is to provide a comprehensive comparison of different aspects of the traditional and semantic web. The architectural and navigational models of the two kinds of web are the subject of the assessment. We also present a detailed comparison of the content processors of these two kinds of web, including modules such as the crawler, indexer, and ranker. Undoubtedly, these evaluations can help us better understand web information systems and prompt solutions to current shortcomings.



Keywords: Crawling, Indexing, Ranking, Semantic web, Search, Traditional web.



WPR: A Weighted Approach to PageRank
        by P. Ghodsnia, A. M. Z. Bidoki, N. Yazdani


Abstract - The PageRank algorithm, used by Google as a successful algorithm for ranking its results, can be interpreted as leveraging the recommendations of all the other page creators on the web about how important a page is. It does not, however, take advantage of the recommendations of page visitors: in PageRank, every page creator propagates the importance score of his or her page to its outgoing links uniformly. In this paper a revised version of PageRank called Weighted PageRank (WPR) is proposed, in which this uniform propagation is transformed into a weighted one. The weights are assigned to outgoing links based on the average opinion of page visitors about the importance of pages, recorded from search engine logs indicating which search results were clicked most. We demonstrate that our approach is readily applicable without any significant extra time or storage cost compared to PageRank.
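The weighted propagation can be sketched directly from the abstract's description: each page splits its score over out-links in proportion to click counts rather than uniformly. This is an illustrative sketch; the toy link structure, click counts, and damping value are invented, and dangling pages are not handled.

```python
def weighted_pagerank(links, clicks, d=0.85, iters=100):
    """PageRank where each page distributes its score over out-links in
    proportion to recorded click counts (visitor opinion), not uniformly."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / len(pages) for p in pages}
        for p, outs in links.items():
            total = sum(clicks[(p, q)] for q in outs)
            for q in outs:
                new[q] += d * rank[p] * clicks[(p, q)] / total
        rank = new
    return rank

# Toy web: A links to B and C, but visitors click A->B far more often.
links = {"A": ["B", "C"], "B": ["A"], "C": ["A"]}
clicks = {("A", "B"): 9, ("A", "C"): 1, ("B", "A"): 1, ("C", "A"): 1}
rank = weighted_pagerank(links, clicks)
assert rank["B"] > rank["C"]  # visitor opinion tilts the ranking toward B
```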



Keywords: PageRank, Search Engine, Ranking, Random Surfer Model.



Re-Engineering To Analyse OO Programming Techniques
        by S. S. Sachdeva, K. S. Kahlon, H. Singh


Abstract - Often the only documentation of a program is the code itself, together with the forgotten thoughts of the programmer who wrote the system many years ago. Reverse engineering tools are available for easing program comprehension and for creating documentation. A lack of documentation, combined with new requirements, motivates rebuilding the system; but rebuilding or redesigning the same system is expensive, so the existing system is analysed instead. The field of analysis is widening day by day, and reusability is needed not only at the code level but also at higher levels. Concentrating on reverse engineering, efforts are made to analyse and model OO files by designing a translator that helps in better understanding of the program and its complexity.



Keywords: Translator, Static analysis, Dynamic analysis, Modeling.


A Computer Architecture Educational System based on a 32-bit RISC Processor
        by D. Mandalidis, P. Kenterlis, J. N. Ellinas


Abstract - This paper describes the implementation of a system-on-a-programmable-chip (SOPC) development board to support computer architecture laboratories at low cost. A commercial field-programmable gate array (FPGA) was employed to develop our reduced-instruction-set-computer (RISC) soft processor core, which may be programmed through a user-friendly environment consisting of an assembler and a remote operation interface. Our approach aims to support a wide variety of student projects in our engineering curriculum, increase students' productivity, and decrease development time. Through the proposed implementation, students are introduced to RISC architecture concepts, SOPC design, and assembler structure. The reusability of the hardware permits flexible materialization of future projects to suit a variety of educational needs. The proposed inexpensive solution forms a complete educational environment suitable for undergraduate use.
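The fetch-decode-execute structure students meet in such a laboratory can be sketched with a toy RISC interpreter. This is illustrative only; the board's actual instruction set is not described in the abstract, and the opcodes and encoding here are invented.

```python
def run(program, registers=8):
    """Fetch-decode-execute loop for a toy load/store RISC: each instruction
    is a (op, a, b, c) tuple over a small register file."""
    r = [0] * registers
    pc = 0
    while pc < len(program):
        op, a, b, c = program[pc]
        if op == "li":    r[a] = b                            # load immediate
        elif op == "add": r[a] = r[b] + r[c]
        elif op == "sub": r[a] = r[b] - r[c]
        elif op == "bne": pc = c - 1 if r[a] != r[b] else pc  # branch if !=
        pc += 1
    return r

# Sum 1..5 with a count-down loop:
prog = [
    ("li", 0, 0, 0),    # r0 = 0  (accumulator)
    ("li", 1, 5, 0),    # r1 = 5  (counter)
    ("li", 2, 1, 0),    # r2 = 1  (constant one)
    ("li", 3, 0, 0),    # r3 = 0  (constant zero)
    ("add", 0, 0, 1),   # r0 += r1
    ("sub", 1, 1, 2),   # r1 -= 1
    ("bne", 1, 3, 4),   # loop back to the add while r1 != 0
]
assert run(prog)[0] == 15
```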



Keywords: System-on-a-chip, system-on-a-programmable-chip, field-programmable-gate-array, processor core, reduced-instruction-set-computer.
