International Science Index

International Journal of Computer and Information Engineering

1988
51125
Cultural References in Jean-François Ménard's French Translation Harry Potter à l'école des sorciers: An Analysis of the Translated Catchphrases and Spells as well as Cultural Elements
Abstract:
The objective of this research project is to assess the ways in which Jean-François Ménard's French translation, Harry Potter à l'école des sorciers, renders the cultural references from the original text, J.K. Rowling's Harry Potter and the Philosopher's Stone. The analysis focuses on the reasons for and the ways in which Ménard translates the spells and catchphrases throughout the novel, and on the effects that these choices have on the reader. While Ménard at times resorts to omission, manipulation, or borrowing, he contrasts these techniques by transferring the cultural references using a direct translational approach. It appears that the translator resorts to techniques other than direct translation when it is necessary to ensure that the target audience will understand the events and conversations taking place.
1987
59198
Effect of Linear Thermal Gradient on Steady-State Creep Behavior of Isotropic Rotating Disc
Abstract:
The present paper investigates the effect of a linear thermal gradient on the steady-state creep behavior of a rotating isotropic disc using threshold-stress-based Sherby's creep law. A composite disc made of an aluminum matrix reinforced with silicon carbide particulate has been taken for the analysis. The stress and strain rate distributions have been calculated for a disc rotating under a linear thermal gradient using von Mises' yield criterion. The material parameters have been estimated by regression fit of the available experimental data. The results are displayed and compared graphically in a designer-friendly format for the above temperature profile and for a disc operating under a uniform temperature profile. It is observed that the radial and tangential stresses show minor variation, whereas the strain rates vary significantly in the presence of the thermal gradient compared to a disc at uniform temperature.
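For reference, a commonly used threshold-stress form of Sherby's creep law for this class of Al-SiC rotating-disc analyses, together with the von Mises effective stress for the biaxial stress state of a thin disc, can be written as follows. This is a sketch of the standard formulation; the regression-fitted constants of this particular study are not reproduced here.

```latex
% Threshold-stress-based Sherby creep law (standard form with stress exponent 8);
% M(T) is the temperature-dependent creep parameter obtained by regression,
% sigma_0 is the threshold stress below which no creep is assumed to occur.
\dot{\bar{\varepsilon}} = \left[ M(T)\,\bigl(\bar{\sigma} - \sigma_{0}\bigr) \right]^{8}
% von Mises effective stress for the plane (biaxial) stress state of a thin rotating disc,
% with radial stress sigma_r and tangential stress sigma_theta.
\bar{\sigma} = \sqrt{\sigma_{r}^{2} - \sigma_{r}\,\sigma_{\theta} + \sigma_{\theta}^{2}}
```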
1986
58309
Secure Hashing Algorithms and Advanced Encryption Algorithms in Cloud Computing
Abstract:
Cloud computing is one of the most significant recent developments in computing technology. It provides flexibility to users, cost effectiveness, location independence, easy maintenance, multitenancy, drastic performance improvements, and increased productivity. On the other hand, there are also major issues, security chief among them: because data resides on shared servers, protecting users' private data is a major concern, especially in e-commerce and social networks. In this paper, encryption algorithms such as the Advanced Encryption Standard (AES), their vulnerabilities, risk of attacks, time and complexity management, and a comparison with other algorithms based on software implementations are presented. Techniques to improve the performance of the AES algorithm and to reduce risk are given. Secure Hash Algorithms, their vulnerabilities, software implementations, risk of attacks, and a comparison with other hashing algorithms, as well as the advantages and disadvantages of hashing versus encryption, are also discussed.
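As a concrete illustration of how these primitives are typically combined to protect user data before it is stored in the cloud, the following is a minimal sketch using Python's hashlib and the `cryptography` package (AES-256-GCM). It is not the implementation evaluated in the paper.

```python
# Minimal sketch: AES-256-GCM encryption plus a SHA-256 digest for integrity checks.
import os
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def protect(plaintext: bytes, key: bytes) -> dict:
    """Encrypt data with AES-256-GCM and keep a SHA-256 digest of the plaintext."""
    nonce = os.urandom(12)                      # unique nonce per message
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return {
        "nonce": nonce,
        "ciphertext": ciphertext,
        "sha256": hashlib.sha256(plaintext).hexdigest(),
    }

def recover(record: dict, key: bytes) -> bytes:
    plaintext = AESGCM(key).decrypt(record["nonce"], record["ciphertext"], None)
    assert hashlib.sha256(plaintext).hexdigest() == record["sha256"]
    return plaintext

key = AESGCM.generate_key(bit_length=256)
record = protect(b"private user data", key)
assert recover(record, key) == b"private user data"
```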
1985
59655
A Greedy Alignment Algorithm Supporting Medication Reconciliation
Abstract:
Reconciling patient medication lists from multiple sources is a critical task supporting the safe delivery of patient care. Manual reconciliation is a time-consuming and error-prone process, and recently attempts have been made to develop efficiency- and safety-oriented automated support for professionals performing the task. An important capability of any such support system is automated alignment – finding which medications from a list correspond to which medications from a different source, regardless of misspellings, naming differences (e.g. brand name vs. generic), or changes in treatment (e.g. switching a patient from one antidepressant class to another). This work describes a new algorithmic solution to this alignment task, using a greedy matching approach based on string similarity, edit distances, concept extraction and normalization, and synonym search derived from the RxNorm nomenclature. The accuracy of this algorithm was evaluated against a gold-standard corpus of 681 medication records; this evaluation found that the algorithm predicted alignments with 99% precision and 91% recall. This performance is sufficient to support decision support applications for medication reconciliation.
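The alignment idea can be illustrated with a minimal sketch that uses only plain string similarity; it omits the RxNorm synonym search and concept normalization that the full algorithm relies on. Every cross-list pair is scored, then the best remaining pairs above a threshold are accepted greedily.

```python
# Minimal greedy alignment sketch (string similarity only, hypothetical medication lists).
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def greedy_align(list_a, list_b, threshold=0.6):
    # Score every cross-list pair, highest score first.
    pairs = sorted(
        ((similarity(a, b), i, j) for i, a in enumerate(list_a) for j, b in enumerate(list_b)),
        reverse=True,
    )
    used_a, used_b, alignment = set(), set(), []
    for score, i, j in pairs:
        if score < threshold:
            break
        if i not in used_a and j not in used_b:   # greedily accept the best unused pair
            alignment.append((list_a[i], list_b[j], round(score, 2)))
            used_a.add(i)
            used_b.add(j)
    return alignment

print(greedy_align(["amoxicillin 500 mg", "metformin 1000 mg"],
                   ["Amoxicillin 500mg capsule", "metformin HCl 1000 mg"]))
```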
1984
59521
Decision Support for Dynamic Adaptation for Large Scale Distributed Systems
Abstract:
The ever-growing demands of the software domain have led to the development of large-scale distributed systems which bring together a wide pool of services and resources. Their composition and deployment come in different solutions tailored to users' requests based on business models, functionality, quality of service, cost, and value. Bridging different parts into one software solution is brittle due to issues like heterogeneity, complexity, lack of transparency, network and communication failures, and misbehavior. The current paper proposes a decision-based solution for the dynamic adaptation part of a middleware which addresses the aforementioned problems for large-scale distributed systems. The envisioned architecture is built on case-based reasoning principles and stands at the base of the adaptation processes that are imperative for ensuring the delivery of high-quality software. The solution is further extended through ground models, with a focus on reliability, availability of components, and failure tolerance, expressed in terms of abstract state machines. The novelty of the approach resides in making use of formal modeling for one of the emerging problems and in introducing an adequate prototype, on top of which one can apply reasoning and verification methods.
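A minimal sketch of the case-based reasoning retrieval step at the heart of such an adaptation engine is given below. The feature names, values, and adaptation actions are hypothetical and serve only to show how the nearest stored case is retrieved and its action reused.

```python
# Minimal CBR retrieval sketch: pick the stored case closest to the current system state.
import math

cases = [
    {"state": {"cpu_load": 0.9, "error_rate": 0.02, "latency_ms": 300}, "action": "scale_out"},
    {"state": {"cpu_load": 0.3, "error_rate": 0.20, "latency_ms": 800}, "action": "restart_component"},
    {"state": {"cpu_load": 0.4, "error_rate": 0.01, "latency_ms": 120}, "action": "no_op"},
]

def distance(a: dict, b: dict) -> float:
    # Euclidean distance over the shared features (features should be normalized in practice).
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

def retrieve(current_state: dict) -> str:
    best = min(cases, key=lambda c: distance(c["state"], current_state))
    return best["action"]

print(retrieve({"cpu_load": 0.85, "error_rate": 0.03, "latency_ms": 350}))  # -> scale_out
```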
1983
60277
Multi-Agent System for Irrigation Using Fuzzy Logic Algorithm and Open Platform Connectivity Data Access
Abstract:
In Canada, municipal water usage doubles in the summer months. These summer peaks place a lot of stress on the water supply system and increase the cost of water, because the capacity to deliver the required water levels for the summer months must be installed even though it is used for only about a quarter of the year. Much of the summer peak demand is attributed to lawn and garden watering. Therefore, as water supplies diminish during periods of low rainfall, some municipalities may declare restrictions on lawn and garden watering. Automated irrigation systems conveniently protect landscape investment and have the potential to optimize water usage. In fact, the new generation of irrigation systems are smart in the sense that they monitor the weather, soil conditions, evaporation and plant water use, and then automatically adjust the irrigation schedule. In this paper, we present an agent-based smart irrigation system. The agents are built using a mix of Commercial Off-The-Shelf (COTS) software, including Matlab, Microsoft Excel and the KEPServer Ex5 OPC server, and custom written code. The Irrigation Schedule Agent uses fuzzy logic to integrate weather, soil condition, and plant water usage data to produce the irrigation schedule. In addition, the multi-agent system uses Open Platform Connectivity (OPC) technology to share data. OPC technology enables the Irrigation Schedule Agents of community participants to communicate over the Internet, making the system scalable to a municipal or regional agent-based water monitoring, management, and optimization system. Finally, this paper presents simulation and pilot installation test results that show the operational effectiveness of our system.
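The fuzzy inference at the core of the Irrigation Schedule Agent can be sketched as follows; the membership functions, rules, and numeric values are hypothetical placeholders rather than the ones used in the actual system.

```python
# Minimal Mamdani-style fuzzy inference sketch: soil moisture and rain forecast -> watering minutes.
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def irrigation_minutes(soil_moisture_pct, rain_prob_pct):
    dry = tri(soil_moisture_pct, -1, 0, 40)
    moist = tri(soil_moisture_pct, 20, 50, 80)
    wet = tri(soil_moisture_pct, 60, 100, 101)
    rain_low = tri(rain_prob_pct, -1, 0, 50)
    rain_high = tri(rain_prob_pct, 40, 100, 101)

    # (rule strength, crisp consequent in minutes of watering)
    rules = [
        (min(dry, rain_low), 30),               # dry soil, little rain expected: water long
        (min(moist, rain_low), 15),             # moist soil, little rain: water briefly
        (min(dry, rain_high), 10),              # dry but rain coming: water a little
        (max(wet, min(moist, rain_high)), 0),   # wet soil or rain expected: skip watering
    ]
    total = sum(w for w, _ in rules)
    return sum(w * m for w, m in rules) / total if total else 0.0  # weighted-average defuzzification

print(round(irrigation_minutes(soil_moisture_pct=25, rain_prob_pct=10), 1))
```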
1982
59831
Wavelet Coefficients Based on Orthogonal Matching Pursuit (OMP) Based Filtering for Remotely Sensed Images
Abstract:
In recent years, remote sensing technology has been growing rapidly. Image enhancement is one of the most commonly used image processing operations. Noise reduction plays a very important role in digital image processing, and various techniques have been put forward to reduce the noise in remote sensing images. Noise reduction using wavelet coefficients based on Orthogonal Matching Pursuit (OMP) has less impact on edges than other available methods, but it is not as well established among edge-preservation techniques. In this paper, we provide a new minimum-patch-based noise reduction OMP technique that reduces the noise in an image, uses edge-preserving patches to retain the edges of the image, and presents superior results to the existing OMP technique. Experimental results show that the proposed minimum-patch approach outperforms existing techniques.
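A minimal sketch of OMP-based filtering of wavelet coefficients is given below; it is not the proposed minimum-patch method, but shows the baseline idea of sparse-coding patches of the detail subbands over a 2-D DCT dictionary using scikit-learn's OrthogonalMatchingPursuit and PyWavelets.

```python
# Minimal sketch: denoise wavelet detail subbands by keeping only a sparse OMP approximation.
import numpy as np
import pywt
from scipy.fft import dct
from sklearn.linear_model import OrthogonalMatchingPursuit
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

PATCH = 8
# 2-D DCT dictionary: each column is one 8x8 separable DCT atom flattened to length 64.
D = np.kron(dct(np.eye(PATCH), norm="ortho"), dct(np.eye(PATCH), norm="ortho"))

def denoise_subband(band, n_nonzero=6):
    patches = extract_patches_2d(band, (PATCH, PATCH))
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero)
    omp.fit(D, patches.reshape(len(patches), -1).T)    # sparse-code all patches at once
    approx = omp.predict(D).T.reshape(patches.shape)   # keep only the sparse approximation
    return reconstruct_from_patches_2d(approx, band.shape)

rng = np.random.default_rng(0)
noisy = rng.normal(size=(64, 64))                      # stand-in for a noisy remote sensing band
cA, (cH, cV, cD) = pywt.dwt2(noisy, "db2")
denoised = pywt.idwt2((cA, tuple(denoise_subband(b) for b in (cH, cV, cD))), "db2")
print(denoised.shape)
```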
1981
57365
Reversible and Adaptive Watermarking for MRI Medical Images
Abstract:
A new medical image watermarking scheme delivering high embedding capacity is presented in this paper. The Integer Wavelet Transform (IWT), a companding technique, and adaptive thresholding are used in this scheme. The proposed scheme embeds the hidden information, recovers it, and restores the input image to its pristine state at the receiving end. Magnetic Resonance Imaging (MRI) images are used for experimental purposes. The scheme first segments the MRI image into non-overlapping blocks and then inserts the watermark into the high-frequency wavelet coefficients of each block. The scheme uses block-based watermarking with iterative optimization of the companding threshold in order to avoid histogram pre- and post-processing. Results show that the proposed scheme performs better than other reversible medical image watermarking schemes available in the literature for MRI images.
1980
60538
Performance Comparison of Kotter and Kschischang Codes and Lifted Rank Metric Codes in Random Linear Network Coding
Abstract:
While linear network coding is a powerful technique which can be used to improve a network's throughput, it is highly susceptible to errors from various sources, and these errors propagate, overwhelming the error correction capability of classical error correction codes. Recently, Kotter and Kschischang (KK) codes have been proposed for error control in random linear network coding; these codes can also be constructed by lifting rank metric codes (LRMC) such as Gabidulin codes. In this paper, we give a performance comparison between the two codes using the NCS-EC simulator. The simulation results show that the LRMC is less complex than the KK code and gives better performance when conditional random linear network coding is used at intermediate nodes.
1979
60338
Fuzzy Availability Analysis of a Battery Production System
Abstract:
In today’s competitive market, there are many alternative products that can be used in a similar manner and for a similar purpose. Therefore, the utility of the product is an important issue for the preferability of the brand. This utility can be measured in terms of functionality, durability, and reliability, all of which are affected by the system's capabilities. Reliability is an important system design criterion for manufacturers seeking high availability. Availability is the probability that a system (or a component) is operating properly and performing its function at a specific point in time or over a specific period of time. System availability provides valuable input for estimating the production rate the company needs to realize its production plan. When only the corrective maintenance downtime of the system is considered, the mean time between failures (MTBF) and the mean time to repair (MTTR) are used to obtain system availability. For reliability engineers and practitioners working with a system, the MTBF and MTTR values are also important measures for improving system performance by adopting suitable maintenance strategies. The failure and repair time probability distributions of each component in the system must be known for conventional availability analysis. Generally, however, companies do not have statistics or quality control departments to store such a large amount of data, and real events or situations are described deterministically instead of using stochastic data for the complete description of real systems. Fuzzy set theory is an alternative theory used to analyze uncertainty and vagueness in real systems. The aim of this study is to present a novel approach to computing system availability by representing MTBF and MTTR as fuzzy numbers. Based on experience with the system, three different spreads of MTBF and MTTR (15%, 20%, and 25%) were chosen to obtain the lower and upper limits of the fuzzy numbers. To the best of our knowledge, the proposed method is the first application that uses fuzzy MTBF and fuzzy MTTR for fuzzy system availability estimation. The method is easy for practitioners working in industry to apply to any repairable production system, and it allows reliability engineers, managers, and practitioners to analyze system performance in a more consistent and logical manner based on fuzzy availability. This paper presents a real case study of a repairable multi-stage production line in a lead-acid battery factory in Turkey, focusing on the wet-charging battery process, which has a higher production level than the other battery types. In this system, components can exist in only two states, working or failed, and it is assumed that when a component fails, it becomes as good as new after repair. Compared with classical methods, using fuzzy set theory and obtaining intervals for these measures is very useful for system managers and practitioners analyzing system capabilities under their own working conditions; much more detailed information about system characteristics is thus obtained.
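A minimal sketch of the computation, with illustrative MTBF and MTTR values rather than the plant's data: each measure is represented as a triangular fuzzy number with the stated spread and propagated through the availability formula A = MTBF / (MTBF + MTTR) by interval arithmetic.

```python
# Minimal fuzzy availability sketch with triangular fuzzy MTBF and MTTR.
def triangular(center, spread):
    """(lower, mode, upper) triangular fuzzy number for a given relative spread."""
    return (center * (1 - spread), center, center * (1 + spread))

def fuzzy_availability(mtbf, mttr):
    a_low = mtbf[0] / (mtbf[0] + mttr[2])    # worst case: short MTBF, long MTTR
    a_mode = mtbf[1] / (mtbf[1] + mttr[1])   # crisp (most likely) availability
    a_up = mtbf[2] / (mtbf[2] + mttr[0])     # best case: long MTBF, short MTTR
    return a_low, a_mode, a_up

for spread in (0.15, 0.20, 0.25):
    mtbf = triangular(120.0, spread)         # hours between failures (illustrative value)
    mttr = triangular(4.0, spread)           # hours to repair (illustrative value)
    low, mode, up = fuzzy_availability(mtbf, mttr)
    print(f"spread {spread:.0%}: availability in [{low:.4f}, {up:.4f}], mode {mode:.4f}")
```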
1978
59586
Improve Security Using Secure Servers Communicating via Internet with Stand-Alone Secure Software
Abstract:
This paper describes the use of the Internet as a feature to enhance the security of our software, which is going to be distributed/sold to users potentially all over the world. By placing some of the features of the secure software in a secure server, we increase the security of such software. The communication between the protected software and the secure server is handled by a double-lock algorithm. An analysis of intruders and of possible responses to detected threats is also presented.
1977
59756
Using the Theory of Reasoned Action and Parental Mediation Theory to Examine Cyberbullying Perpetration Among Children and Adolescents
Abstract:
The advancement and development of social media have inadvertently brought about a new form of bullying – cyberbullying – that transcends the physical boundaries of space. Although extensive research has been conducted in the field of cyberbullying, most of these studies have taken an overwhelmingly empirical angle, and theories guiding cyberbullying research are few. Furthermore, very few studies have explored the association between parental mediation and cyberbullying, with the majority of existing studies focusing on cyberbullying victimization rather than perpetration. Therefore, the present study investigates cyberbullying perpetration from a theoretical angle, with a focus on the Theory of Reasoned Action and Parental Mediation Theory. More specifically, this study examines the direct effects of attitude, subjective norms, descriptive norms, injunctive norms, active mediation, and restrictive mediation on cyberbullying perpetration on social media among children and adolescents in Singapore. Furthermore, the moderating role of age on the relationship between parental mediation and cyberbullying perpetration on social media is examined. A self-administered paper-and-pencil nationally representative survey was conducted. Multi-stage cluster random sampling was used to ensure that schools from all four regions of Singapore (North, South, East, and West) were equally represented in the sample used for the survey. In all, 607 upper primary school children (i.e., Primary 4 to 6 students) and 782 secondary school adolescents participated in our survey; the overall response rate for student participation was 69.6%. An ordinary least squares hierarchical regression analysis was conducted to test the hypotheses and research questions. The results revealed that attitude and subjective norms were positively associated with cyberbullying perpetration on social media. Descriptive norms and injunctive norms were not found to be significantly associated with cyberbullying perpetration. The results also showed that both parental mediation strategies were negatively associated with cyberbullying perpetration on social media. Age was a significant moderator of the relationship between both parental mediation strategies and cyberbullying perpetration. The negative relationship between active mediation and cyberbullying perpetration was found to be greater for children than for adolescents. Children who received high restrictive parental mediation were less likely to perform cyberbullying behaviors, while adolescents who received high restrictive parental mediation were more likely to engage in cyberbullying perpetration. The study reveals that parents should apply active mediation and restrictive mediation in different ways for children and adolescents when trying to prevent cyberbullying perpetration. The effectiveness of active parental mediation for reducing cyberbullying perpetration was greater for children than for adolescents. Younger children were found to respond more positively to restrictive parental mediation strategies, but in the case of adolescents, overly restrictive control was found to increase cyberbullying perpetration; adolescents exhibited fewer cyberbullying behaviors under less restrictive strategies. The findings highlight that the Theory of Reasoned Action and Parental Mediation Theory are promising frameworks to apply in the examination of cyberbullying perpetration.
The findings that different parental mediation strategies had differing effectiveness, based on the children’s age, bring about several practical implications that may benefit educators and parents when addressing their children’s online risk.
1976
58138
Analysis of Lightweight Register Hardware Threat
Abstract:
In this paper, we present a design methodology for a lightweight register transfer level (RTL) hardware Trojan implemented on a MAX II FPGA platform. The dynamic power consumed by the toggling of the various bits of registers, as well as the dynamic power consumed per unit of logic circuits, was analyzed. The hardware Trojan was designed to take advantage of the differences in dynamic power consumed per unit of logic circuits to hide the transferred information. The experimental results show that the register hardware Trojan was successfully implemented, using the different dynamic power consumed per unit of logic circuits to hide the key information of the DES encryption module. More than 100,000 sample curves are needed to reduce the background noise when comparing the sample space, provided the time alignment requirement is fully met. Furthermore, this paper explains the importance of the presence or absence of an external trigger signal in the detection of hardware Trojans.
1975
59660
Enterprise Information Portal Features: Results of Content Analysis Literature Review
Abstract:
Since their introduction in the 1990s, Enterprise Information Portals (EIPs) have been investigated from different perspectives (e.g. project management, technology acceptance, IS success). However, no systematic literature review has been produced to systematize both the research efforts and the technology itself. This paper reports the first results of an extensive systematic literature review study focused on research of EIPs and its categorization; specifically, it reports a conceptual model of EIP features. The previous attempt to categorize EIP features was published in 2002. For the purpose of the literature review, the content of 89 articles was analyzed in order to identify and categorize features of EIPs. The methodology of the literature review was as follows. First, search queries in major indexing databases (Web of Science and SCOPUS) were used. The query results were analyzed according to their usability for the goal of the study. Then, full texts were coded in Atlas.ti according to a previously established coding scheme. The codes were categorized, and the conceptual model of EIP features was created.
1974
56456
Big Data and Health: An Australian Perspective Which Highlights the Importance of Data Linkage to Support Health Research at a National Level
Abstract:
‘Big data’ is a relatively new concept that describes data so large and complex that it exceeds the storage or computing capacity of most systems to perform timely and accurate analyses. Health services generate large amounts of data from a wide variety of sources such as administrative records, electronic health records, health insurance claims, and even smartphone health applications. Health data is viewed in Australia and internationally as highly sensitive. Strict ethical requirements must be met for the use of health data to support health research. These requirements differ markedly from those imposed on data use from industry or other government sectors and may have the effect of reducing the capacity of health data to be incorporated into the real-time demands of the big data environment. This ‘big data revolution’ is increasingly supported by national governments, who have invested significant funds into initiatives designed to develop and capitalize on big data and on methods for data integration using record linkage. The benefits to health following research using linked administrative data are recognised internationally and by the Australian Government through the National Collaborative Research Infrastructure Strategy Roadmap, which outlined a multi-million dollar investment strategy to develop national record linkage capabilities. This led to the establishment of the Population Health Research Network (PHRN) to coordinate and champion this initiative. The purpose of the PHRN was to establish record linkage units in all Australian states, to support the implementation of secure data delivery and remote access laboratories for researchers, and to develop the Centre for Data Linkage for the linkage of national and cross-jurisdictional data. The Centre for Data Linkage has been established within Curtin University in Western Australia; it provides essential record linkage infrastructure necessary for large-scale, cross-jurisdictional linkage of health-related data in Australia and uses a best practice ‘separation principle’ to support data privacy and security. Privacy-preserving record linkage technology is also being developed to link records without the use of names, to overcome important legal and privacy constraints. This paper will present the findings of the first ‘Proof of Concept’ project selected to demonstrate the effectiveness of increased record linkage capacity in supporting nationally significant health research. This project explored how cross-jurisdictional linkage can inform the nature and extent of cross-border hospital use and hospital-related deaths. The technical challenges associated with national record linkage, and the extent of cross-border population movements, were explored as part of this pioneering research project. Access to person-level data linked across jurisdictions identified geographical hot spots of cross-border hospital use and hospital-related deaths in Australia. This has implications for the planning of health service delivery and for longitudinal follow-up studies, particularly those involving mobile populations.
1973
57740
Improving Research by the Integration of a Collaborative Dimension in an Information Retrieval (IR) System
Abstract:
In computer science, finding useful information is still one of the most active and important research topics. The most popular applications of information retrieval (IR) are search engines; they meet users' specific needs and aim to locate relevant information on the web. However, these search engines have some limitations related to the relevance of the results and the ease of exploring those results. In this context, we proposed in previous works a Multi-Space Search Engine model that is based on a multidimensional interpretation universe. In the present paper, we integrate an additional dimension that offers users new search experiences. The added component is based on creating user profiles and calculating the similarity between them, which then allows collaborative filtering to be used in retrieving search results. To evaluate the effectiveness of the proposed model, a prototype was developed. The experiments showed that the additional dimension improves the relevance of results by predicting items of interest to users based on their own experiences and the experiences of other, similar users. The offered personalization service allows users to approve the pertinent items, which enriches their profiles and further improves retrieval.
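A minimal sketch of the collaborative dimension, with made-up user profiles and item names: profiles are compared by cosine similarity, and the items approved by the nearest neighbours are recommended to the current user.

```python
# Minimal collaborative filtering sketch: cosine similarity between user profiles.
import numpy as np

items = ["doc_a", "doc_b", "doc_c", "doc_d", "doc_e"]
profiles = {                       # 1 = user approved the item, 0 = not seen/approved
    "alice": np.array([1, 1, 0, 0, 1]),
    "bob":   np.array([1, 1, 1, 0, 0]),
    "carol": np.array([0, 0, 1, 1, 0]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def recommend(user, k=1):
    target = profiles[user]
    neighbours = sorted(
        (cosine(target, vec), name) for name, vec in profiles.items() if name != user
    )[-k:]                         # k most similar profiles
    scores = sum(sim * profiles[name] for sim, name in neighbours)
    scores[target > 0] = 0         # do not re-recommend items already approved
    return [items[i] for i in np.argsort(scores)[::-1] if scores[i] > 0]

print(recommend("alice"))          # items approved by alice's nearest neighbour (bob)
```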
1972
56745
Fast Tumor Extraction Method Based on Nl-Means Filter and Expectation Maximization
Abstract:
Advances in computer science have allowed computer scientists to contribute to medicine and to assist radiologists, as we present in this article. Our work focuses on the detection and localization of tumor areas in the human brain; the process is completely automatic, without any human intervention. Faced with the huge volume of MRI scans to be processed per day, a radiologist can spend hours of tremendous effort; this burden becomes lighter with the automation of this step. In this article, we present an automatic and effective tumor detection method consisting of two steps: first, the image is filtered using the NL-means filter; then, the expectation maximization (EM) algorithm is applied to retrieve the tumor mask from the brain MRI, and the tumor area is extracted using the mask obtained in this second step. To demonstrate the effectiveness of this method, multiple evaluation criteria are used so that we can compare our method to extraction methods frequently used in the literature.
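A minimal sketch of the two steps described above, not the authors' exact pipeline: scikit-image provides the NL-means filter, and scikit-learn's Gaussian mixture model, whose parameters are fitted by expectation maximization, separates intensity classes to derive a tumor mask from a brain MRI slice.

```python
# Minimal NL-means + EM (Gaussian mixture) tumor mask sketch.
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma
from sklearn.mixture import GaussianMixture

def extract_tumor_mask(mri_slice: np.ndarray, n_classes: int = 3) -> np.ndarray:
    # Step 1: NL-means filtering (noise level estimated from the image itself).
    sigma = float(np.mean(estimate_sigma(mri_slice)))
    filtered = denoise_nl_means(mri_slice, h=1.15 * sigma, sigma=sigma,
                                patch_size=5, patch_distance=6)
    # Step 2: EM clustering of pixel intensities into n_classes components.
    gmm = GaussianMixture(n_components=n_classes, random_state=0)
    labels = gmm.fit_predict(filtered.reshape(-1, 1)).reshape(filtered.shape)
    # Assumption for this sketch: the tumor appears as the brightest intensity class.
    tumor_class = int(np.argmax(gmm.means_.ravel()))
    return labels == tumor_class

rng = np.random.default_rng(0)
slice_ = rng.random((64, 64)).astype(np.float64)   # stand-in for a real MRI slice
mask = extract_tumor_mask(slice_)
print(mask.sum(), "pixels flagged as tumor")
```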
1971
59130
An Interpolation Tool for Data Transfer in Two-Dimensional Ice Accretion Problems
Abstract:
One of the difficulties in icing simulations arises for extended periods of exposure, when very large ice shapes are created. As well as being large, they can have complex shapes, such as a double horn. For icing simulations, these configurations are currently computed in several steps. The icing step is stopped when the ice shapes become too large, at which point a new mesh has to be created to allow further CFD and ice growth simulations to be performed. This can be very costly and is a limiting factor in the simulations that can be performed. A way to avoid the costly human intervention in the re-meshing step of a multistep icing computation is to use mesh deformation instead of re-meshing. The aim of the present work is to apply an interpolation method based on Radial Basis Functions (RBF) to transfer deformations from the surface mesh to the volume mesh. This deformation tool has been developed specifically for icing problems. It is able to deal with localized, sharp and large deformations, unlike the tools traditionally used for smoother wing deformations. This tool will be presented along with validation on typical two-dimensional icing shapes.
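A minimal sketch of the RBF transfer step, on a toy 2-D setup with illustrative displacements rather than the actual icing tool: known displacements at surface nodes define an interpolant (here SciPy's RBFInterpolator) that is then evaluated at the volume mesh nodes.

```python
# Minimal RBF mesh-deformation sketch: surface displacements interpolated onto volume nodes.
import numpy as np
from scipy.interpolate import RBFInterpolator

# Surface nodes of an airfoil-like contour and prescribed displacements
# (e.g. local ice growth normal to the surface); the values are illustrative only.
theta = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
surface_nodes = np.column_stack([np.cos(theta), 0.3 * np.sin(theta)])
surface_disp = (0.05 * np.column_stack([np.cos(theta), np.sin(theta)])
                * (np.cos(3 * theta)[:, None] > 0.5))     # localized "ice" bump

# RBF interpolant built from the surface data (thin-plate spline kernel).
rbf = RBFInterpolator(surface_nodes, surface_disp, kernel="thin_plate_spline")

# Volume mesh nodes around the body receive the interpolated deformation.
grid = np.random.default_rng(1).uniform(-2.0, 2.0, size=(500, 2))
deformed_grid = grid + rbf(grid)
print(deformed_grid.shape)   # (500, 2): deformed volume node coordinates
```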
1970
59137
An Interpolation Tool for Data Transfer in Three-Dimensional Ice Accretion Problems
Abstract:
In icing simulations, very large and complex ice shapes are created. These configurations are currently computed in several steps, stopping when the ice shapes become too large, at which point a new mesh has to be created to allow further CFD and ice growth simulations to be performed. This can be very costly and is a limiting factor in the simulations that can be performed. A way to avoid the cost of the re-meshing step in a multistep icing computation is to use mesh deformation instead of re-meshing. The aim of the present work is to apply an interpolation method based on Radial Basis Functions (RBF) to transfer deformations from the surface ice-deformed mesh to the volume mesh. This tool is able to deal with localized, sharp and large deformations, preserving the quality of the original mesh. This tool will be presented along with validation on some three-dimensional icing shapes.
1969
59277
Building and Tree Detection Using Multiscale Matched Filtering
Abstract:
In this study, an automated building and tree detection method is proposed using DSM data and a true orthophoto image. Multiscale matched filtering is applied to the DSM data. First, a watershed transform is applied; then, Otsu's thresholding method is used as an adaptive threshold to segment each watershed region. Detected objects are masked with NDVI to separate buildings from trees. The proposed method is able to detect buildings and trees without requiring any elevation threshold. We tested our method on the ISPRS semantic labeling dataset and obtained promising results.
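A minimal sketch of the region-wise thresholding and NDVI masking steps, run on synthetic arrays rather than real DSM and orthophoto data; the multiscale matched filtering stage is omitted here.

```python
# Minimal sketch: watershed regions on the DSM, per-region Otsu thresholding, NDVI split.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.segmentation import watershed
from skimage.feature import peak_local_max
from scipy import ndimage as ndi

def detect(dsm, red, nir, ndvi_threshold=0.3):
    # Watershed on the inverted DSM, seeded at local elevation maxima.
    peaks = peak_local_max(dsm, min_distance=10)
    markers = np.zeros(dsm.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    regions = watershed(-dsm, markers)

    elevated = np.zeros(dsm.shape, dtype=bool)
    for label in np.unique(regions):
        values = dsm[regions == label]
        if np.ptp(values) > 0:                        # per-region adaptive Otsu threshold
            elevated[regions == label] = values > threshold_otsu(values)

    ndvi = (nir - red) / (nir + red + 1e-9)
    trees = elevated & (ndvi > ndvi_threshold)        # vegetated elevated objects
    buildings = elevated & (ndvi <= ndvi_threshold)   # non-vegetated elevated objects
    return buildings, trees

rng = np.random.default_rng(0)
dsm = ndi.gaussian_filter(rng.random((128, 128)), 8)  # stand-in for a real DSM
red, nir = rng.random((128, 128)), rng.random((128, 128))
buildings, trees = detect(dsm, red, nir)
print(buildings.sum(), trees.sum())
```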
1968
58323
Design of Two-Channel Quincunx Quadrature Mirror Filter Banks Using Digital All-Pass Lattice Filters
Abstract:
This paper deals with the problem of designing two-dimensional (2-D) recursive two-channel quincunx quadrature mirror filter (QQMF) banks. The analysis and synthesis filters of the 2-D recursive QQMF bank are composed of 2-D recursive digital allpass lattice filters (DALFs) with symmetric half-plane (SHP) support regions. Using the 2-D doubly complementary half-band (DC-HB) property possessed by the analysis and synthesis filters, we facilitate the design of the proposed QQMF bank. To find the coefficients of the 2-D recursive SHP DALFs, we present a structure of 2-D recursive doubly complementary filters built from these SHP DALFs. The novelty of using 2-D SHP recursive DALFs to construct a 2-D recursive QQMF bank is that the resulting bank provides better performance than existing 2-D recursive QQMF banks. Moreover, the proposed structure possesses a favorable 2-D DC-HB property that allows about half of the 2-D SHP recursive DALF's coefficients to be zero, which leads to considerable savings in the computational burden of implementation. Simulation results are also presented for illustration and comparison.
1967
60556
Alternator Fault Detection Using Wigner Ville Distribution
Abstract:
This paper describes a two-stage learning-based fault detection procedure for alternators. The procedure covers three machine conditions: a shortened brush, a high-impedance relay, and the healthy condition. The fault detection algorithm uses the Wigner-Ville distribution as a feature extractor together with an appropriate feature classifier. In this work, an Artificial Neural Network (ANN) and a Support Vector Machine (SVM) were compared to determine which performs better, evaluated by the mean squared error criterion. The modules work together to detect possible faulty conditions of operating machines. To test the method's performance, a signal database was prepared by creating the different conditions on a laboratory setup. The results obtained by implementing this method are satisfactory.
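A minimal sketch of a discrete Wigner-Ville distribution used as a feature extractor; this is an assumed standard formulation, not the authors' code. The instantaneous autocorrelation of the analytic signal is Fourier-transformed over the lag variable, giving a time-frequency map whose statistics could feed an ANN or SVM classifier.

```python
# Minimal discrete Wigner-Ville distribution sketch for feature extraction.
import numpy as np
from scipy.signal import hilbert

def wigner_ville(signal):
    x = hilbert(signal)                          # analytic signal
    n_samples = len(x)
    wvd = np.zeros((n_samples, n_samples))
    for n in range(n_samples):
        lag_max = min(n, n_samples - 1 - n)      # symmetric lags that stay in range
        lags = np.arange(-lag_max, lag_max + 1)
        kernel = np.zeros(n_samples, dtype=complex)
        kernel[lags % n_samples] = x[n + lags] * np.conj(x[n - lags])
        wvd[n] = np.real(np.fft.fft(kernel))     # FFT over the lag variable
    return wvd                                   # rows: time, columns: frequency bins

fs = 1000.0
t = np.arange(0, 0.5, 1 / fs)
current = np.sin(2 * np.pi * 50 * t) + 0.2 * np.sin(2 * np.pi * 150 * t)  # toy alternator signal
tf_map = wigner_ville(current)
features = [tf_map.mean(), tf_map.std(), np.abs(tf_map).max()]            # simple classifier inputs
print([round(float(f), 3) for f in features])
```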
1966
59072
An Eulerian Method for Fluid-Structure Interaction Simulation Applied to Wave Damping by Elastic Structures
Abstract:
A fully Eulerian method is developed to solve the problem of fluid-elastic structure interactions based on a one-fluid method. The interface between the fluid and the elastic structure is captured by a level set function, advected by the fluid velocity and solved with a WENO 5 scheme. The elastic deformations are computed in an Eulerian framework thanks to the backward characteristics. We use the Neo-Hookean or Mooney-Rivlin hyperelastic models, and the elastic forces are incorporated as a source term in the incompressible Navier-Stokes equations. The velocity/pressure coupling is solved with a pressure-correction method, and the equations are discretized by finite volume schemes on a Cartesian grid. The main difficulty is that large deformations in the fluid cause numerical instabilities. In order to avoid these problems, we use a re-initialization process for the level set and linear extrapolation of the backward characteristics. First, we verify and validate our approach on several test cases, including the FSI benchmark proposed by Turek. Next, we apply this method to study the wave damping phenomenon, which is a means of reducing the impact of waves on the coastline. So far, to our knowledge, only simulations with rigid or one-dimensional elastic structures have been reported in the literature. We propose to place elastic structures on the seabed, and we present results where 50% of the wave energy is absorbed.
1965
58973
A Comparison of Sequential Quadratic Programming, Genetic Algorithm, Simulated Annealing, and Particle Swarm Optimization for the Design and Optimization of a Beam Column
Abstract:
This paper describes an integrated optimization study with concurrent use of sequential quadratic programming, a genetic algorithm, simulated annealing, and particle swarm optimization for the design and optimization of a beam column. In this research, a comparison between the four different optimization methods is carried out. It is found that all the methods meet the required constraints and that the lowest value of the objective function is achieved by SQP, which was also the fastest optimizer to produce the results. SQP is a gradient-based optimizer, hence its results are usually the same after every run; the only thing that affects them is the initial conditions given. The initial conditions given in the various test runs were very different, and hence the value converged at a different point. The remaining methods are heuristic methods that provide different values for different runs even if every parameter is kept constant.
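The contrast between a gradient-based SQP-type solver and a population-based heuristic can be illustrated with a minimal sketch on a toy constrained problem, not the paper's beam-column model: SciPy's SLSQP stands in for SQP, and differential evolution stands in for the GA/SA/PSO family; both minimize the same objective under a nonlinear constraint.

```python
# Minimal sketch: gradient-based (SLSQP) vs. heuristic (differential evolution) optimization.
import numpy as np
from scipy.optimize import minimize, differential_evolution, NonlinearConstraint

def objective(x):
    # Stand-in "weight" of the member, proportional to its cross-sectional dimensions.
    return x[0] * x[1]

def stress_margin(x):
    # Toy constraint: a load-capacity measure that must stay >= 0.
    return x[0] * x[1] ** 2 - 10.0

bounds = [(0.5, 10.0), (0.5, 10.0)]

sqp = minimize(objective, x0=[5.0, 5.0], method="SLSQP", bounds=bounds,
               constraints=[{"type": "ineq", "fun": stress_margin}])

heuristic = differential_evolution(objective, bounds, seed=0,
                                   constraints=NonlinearConstraint(stress_margin, 0.0, np.inf))

print("SLSQP:", sqp.x, round(sqp.fun, 4))          # deterministic given the starting point
print("DE   :", heuristic.x, round(heuristic.fun, 4))  # stochastic, varies with the seed
```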
1964
56778
Metamaterial Lenses for Microwave Cancer Hyperthermia Treatment
Abstract:
Nowadays, microwave hyperthermia is considered an effective treatment for malignant tumors. This microwave treatment, which can substitute for chemotherapy and surgical intervention, enables in-depth tumor heating without causing any damage to the healthy tissue. The technique requires a high-precision system in order to concentrate the heating effectively in the tumor alone, without heating any surrounding healthy tissue. In hyperthermia treatment, the temperature in the cancerous area is typically raised to over 42 °C and maintained for one hour in order to destroy the tumor sufficiently, whilst in the surrounding healthy tissues the temperature is maintained below 42 °C to avoid any damage. Metamaterial lenses are widely used in medical applications like microwave hyperthermia treatment. They enable subdiffraction resolution thanks to the amplification of evanescent waves, and they can focus electromagnetic waves from a point source to a point image. Metasurfaces have been used to build metamaterial lenses. The main mechanical advantages of these structures over three-dimensional material structures are ease of fabrication and a smaller required volume. In this work, we propose a metasurface-based lens operating at a frequency of 6 GHz and designed for microwave hyperthermia. This lens was applied and showed good results in focusing on and heating a tumor inside breast tissue, with the temperature increased and maintained above 42 °C. The tumor was placed at the focal distance of the lens so that only the tumor tissue would be heated. Finally, it is shown that the hyperthermia area within the tissue can be carefully adjusted by moving the antennas or by changing the thickness of the metamaterial lenses according to the tumor position. Even though the simulations performed in this work assume an ideal case, some real characteristics can be considered to improve the obtained results in a realistic model.
1963
59645
An Efficient Subcarrier Scheduling Algorithm for Downlink OFDMA-Based Wireless Broadband Networks
Abstract:
The growth of wireless technology has made opportunistic scheduling a widespread theme in recent research. Providing high system throughput without reducing fairness of allocation is becoming a very challenging task, and a suitable policy for resource allocation among users is of crucial importance. This study focuses on scheduling multiple streaming flows on the downlink of a WiMAX system based on orthogonal frequency division multiple access (OFDMA). In this paper, we take the first step in formulating and analyzing this problem rigorously. As a result, we propose a new scheduling scheme based on the Round Robin (RR) algorithm. Because of its non-opportunistic nature, RR does not take radio conditions into account, which affects both system throughput and multi-user diversity. Our contribution, called MORRA (Modified Round Robin Opportunistic Algorithm), proposes a solution to this issue. MORRA not only exploits the concept of an opportunistic scheduler but also takes other parameters into account in the allocation process: the first parameter is the courtesy coefficient (CC) and the second is the buffer occupancy (BO). Performance evaluation shows that this well-balanced scheme outperforms both the RR and MaxSNR schedulers and demonstrates that choosing between system throughput and fairness is not required.
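A minimal sketch of the allocation idea follows; the scoring formula, weights, and update rules are hypothetical, since the paper does not spell out MORRA's exact expression. Unlike plain Round Robin, the per-subcarrier decision combines channel quality with a courtesy coefficient (time since last service) and buffer occupancy.

```python
# Minimal opportunistic subcarrier allocation sketch combining SNR, CC and BO (hypothetical weights).
import random

def allocate(users, n_subcarriers, w_snr=0.5, w_cc=0.3, w_bo=0.2):
    allocation = {u["id"]: [] for u in users}
    for sc in range(n_subcarriers):
        def score(u):
            return (w_snr * u["snr"][sc] / 30.0       # normalized channel quality
                    + w_cc * u["cc"]                  # users waiting longer get priority
                    + w_bo * u["bo"])                 # fuller buffers get priority
        chosen = max(users, key=score)
        allocation[chosen["id"]].append(sc)
        chosen["cc"] = 0.0                            # served: reset courtesy coefficient
        chosen["bo"] = max(0.0, chosen["bo"] - 0.1)   # served: drain part of its buffer
        for u in users:
            if u is not chosen:
                u["cc"] = min(1.0, u["cc"] + 0.1)     # unserved users accumulate courtesy
    return allocation

random.seed(0)
users = [{"id": k, "snr": [random.uniform(0, 30) for _ in range(16)], "cc": 0.0, "bo": random.random()}
         for k in range(4)]
print(allocate(users, n_subcarriers=16))
```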
1962
59624
Improved Pattern Matching Applied to Surface Mounting Devices Components Localization on Automated Optical Inspection
Abstract:
Automated Optical Inspection (AOI) systems are commonly used in Printed Circuit Board (PCB) manufacturing. The use of this technology has proven highly effective for process improvement and quality achievement. The correct extraction of each component for subsequent analysis is a critical step of the AOI process. Nowadays, the pattern matching algorithm is commonly used, although it requires extensive calculation and is time-consuming. This paper presents an improved algorithm for the component localization process, capable of implementation in a parallel execution system.
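For context, the baseline pattern matching step can be sketched as follows on a synthetic image; this is not the authors' improved algorithm. OpenCV's matchTemplate computes the normalized cross-correlation of a component template against the board image, and the best-scoring position localizes the component.

```python
# Minimal template-matching sketch for SMD component localization (synthetic data).
import cv2
import numpy as np

rng = np.random.default_rng(0)
board = (rng.random((480, 640)) * 40).astype(np.uint8)   # stand-in for a PCB grayscale image
template = np.full((30, 60), 90, dtype=np.uint8)          # dark component body
template[:, :12] = 220                                     # left solder pad
template[:, -12:] = 220                                    # right solder pad
board[100:130, 250:310] = template                         # "place" the component on the board

scores = cv2.matchTemplate(board, template, cv2.TM_CCOEFF_NORMED)
_, max_score, _, max_loc = cv2.minMaxLoc(scores)

x, y = max_loc
print(f"component located at x={x}, y={y} with score {max_score:.3f}")  # expect ~ (250, 100)
```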
1961
58826
Design and Application of NFC-Based Identity and Access Management in the Cloud Services
Abstract:
In response to a changing world and the fast growth of the Internet, more and more enterprises are replacing web-based services with cloud-based ones. Multi-tenancy technology is becoming more important, especially with Software as a Service (SaaS). This, in turn, leads to a greater focus on the application of Identity and Access Management (IAM). Conventional Near-Field Communication (NFC) based verification relies on a computer browser and a card reader to access an NFC tag; this type of verification does not support mobile device login or user-based access management functions. This study designs an NFC-based third-party cloud identity and access management scheme (NFC-IAM) that addresses this shortcoming. Data from simulation tests analyzed with Key Performance Indicators (KPIs) suggest that NFC-IAM not only takes less time for identity verification but also cuts two-factor authentication time by 80% and improves verification accuracy to 99.9% or better. In functional performance analyses, NFC-IAM performed better in scalability and portability. The NFC-IAM app and back-end system, to be developed and deployed on mobile devices, are to support identity and access management features and also offer users a more user-friendly experience and stronger security protection. In the future, our proposed NFC-IAM can be employed in different environments, including identification for mobile payment systems and permission management for remote equipment monitoring, among other applications.
1960
53017
Honeypots and Honeynets: Concepts, Approaches, and Challenges
Abstract:
The early users of the Internet did not spend much time thinking about whether or not their online activities presented a threat to the network or to their own data. Today, the Internet is a very different network compared to its beginnings. More people rely on the network for their personal, financial, and business needs, and information security is a growing concern for organizations and individuals alike. This has led to growing interest in more aggressive forms of defense to supplement the existing methods. Some of these methods involve the use of honeypots or honeynets. A honeynet is a form of high-interaction honeypot whose aim is to gather extensive information on threats. A honeynet is an architecture; its two critical requirements are data control and data capture. This paper presents an overview of honeynets and highlights different kinds of honeynets, honeynet concepts, and approaches to their implementation. It serves as a starting point for individuals and organizations who are interested in this technology.
1959
59669
Towards a Robust Patch Based Multi-View Stereo Technique for Textureless and Occluded 3D Reconstruction
Abstract:
Patch-based reconstruction methods have been, and still are, among the top-performing approaches to 3D reconstruction. Their local approach to refining the position and orientation of a patch, free of global minimisation and independent of surface smoothness, makes patch-based methods extremely powerful in recovering fine-grained detail of an object's surface. However, patch-based approaches still fail to faithfully reconstruct textureless or highly occluded surface regions; thus, though they perform well under lab conditions, they deteriorate in industrial or real-world situations. They are also computationally expensive. Current patch-based methods generate point clouds with holes in textureless or occluded regions that require expensive energy minimisation techniques to fill and interpolate into a high-fidelity reconstruction. Such shortcomings hinder the adoption of these methods in industrial applications, where object surfaces are often highly textureless and the speed of reconstruction is an important factor. This paper presents ongoing work towards a multi-resolution approach to address these problems, utilizing particle swarm optimisation to reconstruct high-fidelity geometry and increasing robustness to textureless features through an adapted approach to normalised cross correlation. The work also aims to speed up the reconstruction using advances in GPU technologies and to remove the need for costly initialization and expansion. Through the combination of these enhancements, it is the intention of this work to create denser patch clouds, even in textureless regions, within a reasonable time. Initial results show the potential of such an approach to construct denser point clouds with accuracy comparable to that of the current top-performing algorithms.