International Science Index

International Journal of Computer and Information Engineering

Importance of Ethics in Cloud Security
This paper examines the importance of ethics in cloud computing. In modern society, cloud computing offers individuals and businesses virtually unlimited space for storing and processing data. Much of the data stored in the cloud by users such as banks, doctors, architects, engineers, lawyers, consulting firms, and financial institutions requires a high level of confidentiality and safeguarding. Cloud computing offers centralized storage and processing of data, which has contributed immensely to the growth of businesses and improved sharing of information over the internet. However, the accessibility and management of data and servers by a third party raise concerns about the privacy of clients' information and possible manipulation of the data by third parties. This paper suggests approaches that the various stakeholders should take to address the ethical issues involved in cloud-computing services. Ethical education and training are key for all stakeholders involved in handling data and information stored or processed in the cloud.
A Deterministic Approach for Solving the Hull and White Interest Rate Model with Jump Process
This work considers the resolution of the Hull and White interest rate model with a jump process. A deterministic approach is adopted to model the random behavior of interest rate variation as deterministic perturbations depending on the time t. The Brownian-motion and jump uncertainties are represented by the integrals of a piecewise-constant function w(t) and a point function θ(t), respectively. We show that the interest rate function and the yield function of the Hull and White interest rate model with jump process can be obtained by solving a nonlinear semi-infinite programming problem. A relaxed cutting plane algorithm is then proposed for solving the resulting optimization problem. The method is calibrated on 3-month U.S. Treasury securities data and is used to analyze several effects on interest rate prices, including interest rate variability and the negative correlation between stock returns and interest rates. The numerical results illustrate that our approach generates yield functions with minimal fitting errors and small oscillation.
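For reference, the short-rate dynamics of the Hull and White model extended with a jump term are commonly written in the following textbook form; the paper's own notation with w(t) and θ(t) as deterministic perturbations may differ from the symbols used here:

```latex
dr(t) = \bigl(\theta(t) - a\,r(t)\bigr)\,dt + \sigma\,dW(t) + dJ(t)
```

where a is the mean-reversion speed, σ the volatility, θ(t) is chosen to fit the initial term structure, W(t) is a standard Brownian motion, and J(t) is a compound Poisson jump process.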
The Application of Neural Network in the Reworking of Accu-Check to Wrist Bands to Monitor Blood Glucose in the Human Body
High blood sugar, whose effects can culminate in diabetes mellitus, is becoming a rampant disorder in our community. In recent times it has become a silent killer owing to poor awareness among the public. The situation calls for urgency, hence the need to design a device that serves as a monitoring tool, worn like a wrist watch, to warn people living with high blood glucose of danger ahead of time and to introduce a mechanism for checks and balances. The computational ability of a neural network is established in this research using a neural architecture of 8-15-9 configuration: eight neurons at the input stage (including a bias), 15 neurons in the hidden layer at the processing stage, and nine outputs for symptom cases. The inputs are formed using the exclusive OR (XOR), with the XOR output expected as the threshold value for diabetic symptom cases. The neural algorithm is coded in Java and run for 1000 epochs to reduce the error to the barest minimum. The internal circuitry of the device comprises compatible hardware matched to the nature of each input neuron. Red, green, and yellow light-emitting diodes (LEDs) are used as the network outputs to indicate pattern recognition for severe cases, pre-hypertensive cases, and normal cases without traces of diabetes mellitus. The research concludes that a neural network is a more efficient Accu-Check design tool for monitoring high glucose levels than conventional blood-testing methods.
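The forward pass of the 8-15-9 architecture described above can be sketched as follows. The abstract's implementation is in Java; this NumPy sketch uses illustrative random weights rather than the trained ones, and the sigmoid activation is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes taken from the abstract's 8-15-9 architecture:
# 8 inputs (including a bias term), 15 hidden neurons, 9 symptom outputs.
W1 = rng.normal(size=(15, 8))
W2 = rng.normal(size=(9, 15))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """One forward pass through the 8-15-9 network."""
    h = sigmoid(W1 @ x)      # hidden layer activations, shape (15,)
    return sigmoid(W2 @ h)   # output layer activations, shape (9,)

x = np.append(rng.integers(0, 2, size=7), 1.0)  # 7 binary inputs + bias
y = forward(x)
```

Training (the 1000-epoch backpropagation run mentioned in the abstract) would adjust W1 and W2 to minimize the error between y and the target symptom pattern.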
Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics
Within the past decade, using Convolutional Neural Networks (CNNs) to create deep learning systems capable of translating sign language into text has been a breakthrough in breaking the communication barrier for deaf-mute people. Conventional research on this subject has been concerned with training the network to recognize the fingerspelling gestures of a given language and produce their corresponding alphanumerics. One problem with the current technology is that training images are scarce, with little variation in the gestures presented to the recognition program, and are often skewed towards particular skin tones and hand sizes, which makes a percentage of the population's fingerspelling harder to detect. In addition, current gesture detection programs are trained on only one fingerspelling language, despite there being one hundred and forty-two known variants so far. All of this limits the traditional exploitation of current technologies such as CNNs, owing to their large number of required parameters. This work presents a technology that aims to resolve this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data even when covering a broad range of sign languages such as American Sign Language, British Sign Language, and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is posed as an operator mapping an input in the set of images u ∈ U to an output in a set of predicted class labels q ∈ Q, where q encodes the alphanumeric and the language it comes from.
These inputs and outputs, along with the internal variables z ∈ Z representing the system's current state, imply a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xᵢ are i.i.d. vectors drawn from a product distribution, over a period of time the AI generates a large set S of measurements xᵢ, grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can be applied by centering S and Y, i.e., subtracting their means. The data are then regularized by applying the Kaiser rule to the resulting eigenmatrix and whitened before being split into pairwise, positively correlated clusters. Each cluster produces a unique hyperplane, and if any element x falls outside the region bounded by these hyperplanes it is reported as an error. The result is a self-correcting recognition process that can identify fingerspelling from a variety of sign languages and successfully determine both the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.
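The centering, Kaiser-rule reduction, and whitening steps of the corrector can be sketched as follows. The pairwise clustering and hyperplane construction are omitted, and the Kaiser-rule variant used here (keeping eigenvalues above the mean) is an assumption, as is the synthetic measurement matrix:

```python
import numpy as np

def kaiser_whiten(S):
    """Center, reduce by the Kaiser rule, and whiten a set of measurement
    vectors S (rows are samples). A sketch of the preprocessing stage only;
    the cluster/hyperplane error detection that follows is not shown."""
    X = S - S.mean(axis=0)                  # centering
    C = np.cov(X, rowvar=False)             # sample covariance matrix
    vals, vecs = np.linalg.eigh(C)
    keep = vals > vals.mean()               # Kaiser rule: eigenvalues above the mean
    vals, vecs = vals[keep], vecs[:, keep]
    return (X @ vecs) / np.sqrt(vals)       # whitened coordinates

rng = np.random.default_rng(1)
Z = kaiser_whiten(rng.normal(size=(200, 10)))   # 200 synthetic measurements x_i
```

After whitening, the retained coordinates have (sample) identity covariance, which is the state in which the pairwise correlated clusters and separating hyperplanes are constructed.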
When There Is Too Much of a Good Thing: A Data-Driven Approach for Large-Scale Literature Review
The volume of available literature on any given scholarly topic is daunting. It can be impossible to manually read and meaningfully synthesize information when search results uncover tens of thousands of possibly relevant articles. Recent advances in citation network analysis and text mining provide new opportunities for constructing robust summaries of large bodies of literature via a purely data-driven approach. Here, we propose a novel combination of two established techniques, citation network analysis followed by latent semantic analysis, to allow data-driven summaries of the literature. Citation network analysis extracts clusters formed by groups of publications connected by mutual citation, weighting by date to account for the greater likelihood of older publications being cited more. The resulting clusters can be taken as indicative of theoretical or conceptual groupings in the literature, typically reflecting different historical approaches (e.g., biological vs. sociological mindsets). Text mining of the titles, abstracts, or article contents of these publications using word frequency and nearest-neighbor techniques can then generate simple keyword summaries that encapsulate the content of these clusters. This talk will walk through two examples of how this approach can summarize large bodies of literature (11,000+ scholarly works). We begin with a Web of Science Core Collection database search for word stems of the key terms of interest (e.g., supervis* for 'supervisor,' 'supervision,' 'supervised'). Full records of titles, years of publication, and citations of each article (including secondary articles, those citing and cited by documents containing the search terms) are imported into CitNetExplorer for cluster analysis. The plain text of the titles, abstracts, or full texts is imported into R (a language for statistical programming).
A corpus for each cluster is created from the imported text by removing numbers, non-ASCII characters, and stop words ('and', 'or', etc.) and reducing words to their stems. Word frequency matrices are used to identify the most frequent words unique to each cluster, which can be taken as a summary of its unique conceptual focus. Further, latent semantic analysis can then be used to identify the ways in which the key term is conceptually embedded within each cluster. Briefly, an n-dimensional latent semantic space is constructed via singular value decomposition of the corpus. The nearest neighbors (largest cosine within this semantic space) of the key term can be taken as the conceptual context of the topic within each cluster. We show that the combination of citation network analysis and latent semantic analysis is a generalizable, systematic, and informative approach to summarizing large bodies of scholarly literature, achievable using a pipeline of open-source and freely available software.
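The latent-semantic steps above (term-document matrix, singular value decomposition, cosine nearest neighbors) can be sketched as follows. The abstract's pipeline runs in R; this Python sketch uses a tiny toy corpus of hypothetical stemmed documents, not the actual cluster text:

```python
import numpy as np

# Toy corpus: one hypothetical stemmed "document" per publication.
docs = ["supervision training workplace", "supervision neural training",
        "clinical supervision therapy", "therapy clinical outcomes"]
vocab = sorted({w for d in docs for w in d.split()})

# Term-document frequency matrix (rows = terms, columns = documents).
A = np.array([[d.split().count(w) for d in docs] for w in vocab], float)

# Latent semantic space via singular value decomposition, truncated to k dims.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
term_vecs = U[:, :k] * s[:k]             # terms embedded in the semantic space

def nearest(word):
    """Cosine-nearest neighbour of `word` in the latent semantic space."""
    v = term_vecs[vocab.index(word)]
    sims = term_vecs @ v / (np.linalg.norm(term_vecs, axis=1) * np.linalg.norm(v))
    sims[vocab.index(word)] = -1.0       # exclude the word itself
    return vocab[int(np.argmax(sims))]
```

The nearest neighbors of the key term (here, e.g., `nearest("supervision")`) serve as its conceptual context within the cluster.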
The Many Faces of Inspiration: A Study on Socio-Cultural Influences in Design
The creative journey in design often starts with a spark of inspiration, which can come from myriad stimuli: nature, poetry, personal experiences, or even fleeting thoughts and images. While inspiration is indeed an important source of creative exploration, its interpretation may often be influenced by demographic and psychographic variables of the creator, age, gender, life-cycle stage, personal experiences, and individual personality traits being some of these factors. Common sources of inspiration can thus be interpreted differently, translating into different elements of design and varied principles in their execution. Do such variables in the creator influence the nature of the creative output? If so, what are the visible matrices in the output that can be differentiated? An observational study with two groups of design students, studying at the same design institute under the guidance of the same design mentor, was conducted to map this influence. The groups were unaware of each other but worked with a common source of inspiration provided by the instructor. To maintain congruence, both groups were given lyrical compositions from well-known ballads and poetry as the source of their inspiration. The outputs were abstract renditions using lines, colors, and shapes, and these were analyzed under matrices for the elements and principles used to create the compositions. The study indicated a demarcation between the two groups in the choice of lines, colors, and shapes used to create the compositions. The groups also tended to use repetition, proportion, and emphasis differently, giving rise to varied uses of design principles. The study yielded interesting observations on how design interpretation can vary for the same source of inspiration, based on demographic and psychographic variances.
The implications can be traced not just to the process of creative design, but also to the deep social roots that bind creative thinking and design ideation, which can provide an interesting commentary between different cohorts on what constitutes 'good design'.
Modes of Seeing in Interactive Exhibitions: A Study on How Technology Can Affect the Viewer and Transform the Exhibition Spaces
The current art exhibition scene presents a multitude of visualization features deployed in experiences that instigate a process of art production and design. Exhibition design through multimedia devices, from the audiovisual to the touch screen, has become a medium through which art can be understood and contemplated. During the modern period, artistic practices articulated the spectator's perception in the exhibition space, often challenging the architecture of museums and galleries. In turn, the museum institution seeks to respond to the challenge of welcoming a viewer whose experience is mediated by technological artifacts. When the beholder, together with the technology, interacts with the exhibition space, important displacements happen. In this work, we analyze the migration of the exhibition space to the digital environment through mobile devices triggered by the viewer. This work is based not on technological determinism but on the conditions of the appearance of this spectator, with the aim of apprehending the way in which technology demarcates the difference between what the spectator was and what the spectator becomes in the contemporary atmosphere of museums and galleries. These notions, we believe, will contribute to the formation of an exhibition design space in conformity with this participant.
Participatory and Experience Design in Advertising: An Exploratory Study of Advertising Styles of Cultures
Advertising today has become an indispensable phenomenon for both businesses and consumers. Given the rapidly changing market and growing competitiveness, the success of the many firms that produce similar merchandise depends largely on how professionally and effectively they use marketing communication elements, which must also convey a sense of shared values between the message provider and the receiver within cultural and global trends. This paper demonstrates how consumer behaviour and communication through cultural values shape advertising styles. Using samples of award-winning ads from both the author's and other professionals' creative work, the study reveals a significant correlation between cultural elements and advertisement reception with respect to language and cultural norms. The findings draw attention to the change in communication at the beginning of the 21st century, which has shaped a new style of participatory and experience design in advertising.
Anti-Forensic Countermeasure: An Extended Examination and Analysis Procedure for Information Hiding in Android SMS Encryption Applications
Smartphone technology is advancing very rapidly across various fields. One of the mobile operating systems that dominates the smartphone market today is Android by Google. Unfortunately, the expansion of mobile technology is misused by criminals to hide the information that they store or exchange with each other, making it more difficult for law enforcement to prove crimes in the judicial process (anti-forensics). One technique used to hide information is encryption, such as the use of SMS encryption applications. A mobile forensic examiner or investigator should have a countermeasure technique ready for such findings during the investigation process. This paper discusses an extended procedure for cases where the investigator finds unreadable SMS messages in Android evidence because of encryption. To define the extended procedure, we created and analyzed a dataset of Android SMS encryption applications. The dataset was grouped by application characteristics related to communication permissions, as well as the availability of source code and documentation of the encryption scheme. Permissions indicate how applications may exchange data and keys. Availability of the source code and the encryption scheme documentation can reveal which cryptographic algorithm is used, the key length, how key generation, key exchange, and encryption/decryption are performed, and other related information. The output of this paper is an extended, or alternative, procedure for the examination and analysis phases of Android digital forensics. It can guide investigators when they are confounded by SMS encryption during examination and analysis: what steps should the investigator take so that they still have a chance to recover the encrypted SMS messages in Android evidence?
Searching Forensic Evidence in a Compromised Virtual Web Server against Structured Query Language Injection Attacks and PHP Web Shell
SQL injection is one of the most common types of attacks and has a very critical impact on web servers. In the worst case, an attacker can perform post-exploitation after a successful SQL injection attack. In web server forensics, analysis is closely tied to log file analysis, but large file sizes and differing log types can make it difficult for investigators to find traces of attackers on the server. The purpose of this paper is to help investigators take appropriate steps when a web server is attacked. We use attack scenarios involving SQL injection, including PHP backdoor injection as post-exploitation, and perform post-mortem analysis of web server logs based on the Hypertext Transfer Protocol (HTTP) POST and GET request patterns that are characteristic of SQL injection attacks. In addition, we propose a structured method for analyzing the web server application log file together with the database application log and other additional logs present on the web server. This method gives the investigator a more structured way to analyze the log files and produce evidence of the attack within acceptable time; other attack techniques may also be detectable with it. It can likewise help web administrators prepare their systems for forensic readiness.
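The GET/POST log-scanning idea above can be sketched as follows. The signature patterns are a small illustrative subset of common SQL injection indicators, not the paper's actual rule set, and the log lines are hypothetical Apache-style examples:

```python
import re

# Illustrative subset of SQL injection signatures (not exhaustive).
SQLI_PATTERNS = re.compile(
    r"(union\s+select|or\s+1=1|sleep\(|information_schema|%27|--\s)",
    re.IGNORECASE)

def suspicious_requests(log_lines):
    """Return log lines whose GET/POST request strings match an
    SQL injection signature."""
    hits = []
    for line in log_lines:
        m = re.search(r'"(GET|POST)\s+(\S+)', line)   # extract the request URI
        if m and SQLI_PATTERNS.search(m.group(2)):
            hits.append(line)
    return hits

log = [
    '10.0.0.5 - - [01/Jan/2018] "GET /item.php?id=1 HTTP/1.1" 200',
    '10.0.0.9 - - [01/Jan/2018] "GET /item.php?id=1%27+UNION+SELECT+user,pass-- HTTP/1.1" 200',
]
```

Cross-referencing the timestamps of such hits against database and system logs is what the structured analysis method then automates across log sources.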
Rapid Evidence Remote Acquisition in High-Availability Server and Storage System for Digital Forensic to Unravel Academic Crime
Nowadays, digital systems, including but not limited to computers and the internet, have penetrated the education system widely. Critical information such as students' academic records is stored on servers off- or on-campus. Although several countermeasures have been taken to protect these vital resources from outside attack, defense against insider threats has not received serious attention. At the end of 2017, a security incident involving the academic information system of one of the most respected universities in Indonesia affected not only the reputation of the institution and its academia but also academic integrity in Indonesia. In this paper, we explain our efforts in investigating this security incident, in which we implemented a novel method for rapid remote acquisition of evidence from a high-availability server and storage system; our data collection did not disrupt the academic information system and could be conducted remotely minutes after the incident report was received. The acquired evidence is analyzed during the digital forensic process by reconstructing a model of the system in an isolated environment, which allows multiple investigators to work together. In the end, the suspect was identified as a student (an insider), and the investigation results were used by prosecutors to charge the suspect with an academic crime.
Control Performance Simulation and Analysis for Microgravity Vibration Isolation System Onboard Chinese Space Station
The Microgravity Science Experiment Rack (MSER) will be onboard the TianHe (TH) spacecraft, planned for launch in 2018. TH is one module of the Chinese Space Station. The Microgravity Vibration Isolation System (MVIS), MSER's core part, is used to isolate disturbances from TH and provide a high-level microgravity environment for the science experiment payload. MVIS is a two-stage vibration isolation system consisting of a Follow Unit (FU) and an Experiment Support Unit (ESU). The FU is linked to MSER by umbilical cables, and the ESU is suspended within the FU without physical connection. The FU's position and attitude relative to TH are measured by a binocular vision measuring system, and its acceleration and angular velocity are measured by accelerometers and gyroscopes. Air-jet thrusters generate the force and moment to control the FU's motion. The measurement module on the ESU contains a set of position-sensitive detectors (PSDs) sensing the ESU's position and attitude relative to the FU, plus accelerometers and gyroscopes sensing the ESU's acceleration and angular velocity. Electromagnetic actuators control the ESU's motion. First, the linearized equations of the FU's motion relative to TH and the ESU's motion relative to the FU are derived, laying the foundation for control system design and simulation analysis. Subsequently, two control schemes are proposed. In one, the ESU tracks the FU and the FU tracks TH, abbreviated E-F-T; in the other, the FU tracks the ESU and the ESU tracks TH, abbreviated F-E-T. In addition, motion is constrained within ±15 mm and ±2° between FU and ESU, and within ±300 mm between FU and TH or between ESU and TH. A Proportional-Integral-Derivative (PID) controller is designed to control the FU's position and attitude. The ESU's controller includes an acceleration feedback loop and a relative position feedback loop.
A Proportional-Integral (PI) controller is designed in the acceleration feedback loop to reduce the ESU's acceleration level, and a PID controller in the relative position feedback loop is used to avoid collision. Finally, simulations of E-F-T and F-E-T are performed considering a variety of uncertainties, disturbances, and motion-space constraints. The simulation results for E-F-T showed control performance from 0 to -20 dB for vibration frequencies from 0.01 to 0.1 Hz, with vibration attenuated by 40 dB per decade above 0.1 Hz. The simulation results for F-E-T showed vibration attenuated by 20 dB per decade starting from 0.01 Hz.
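The discrete PID loop used in the relative-position feedback can be sketched as follows. This is a minimal sketch: the gains, the 0.01 s time step, and the unit-mass single-axis plant are illustrative choices, not the flight values:

```python
class PID:
    """Discrete PID controller, a sketch of the relative-position loop."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Drive a unit-mass position toward 0 from an initial 10 mm offset.
pid = PID(kp=4.0, ki=0.5, kd=3.0, dt=0.01)
x, v = 10.0, 0.0            # position (mm) and velocity
for _ in range(5000):       # 50 s of simulated time
    f = pid.step(-x)        # error = setpoint (0) - position
    v += f * pid.dt         # semi-implicit Euler integration
    x += v * pid.dt
```

In the real system the same structure runs per axis for position and attitude, with the PI acceleration loop layered underneath.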
Shifting from Information Security towards Cybersecurity Paradigm
The richness of terminology in the fields of Information Technology (IT), Information Security (IS), and cyber-related risk is advantageous. However, when the theoretical legacy yields different definitions and meanings for the same term, it creates confusion and loses value. This paper introduces the inconsistencies and problems encountered when overcoming discrepancies in cybersecurity terminology, inconsistencies and problems that lead to misunderstanding and misuse and, in turn, leave organisations insecure. Straying from the true meaning affects the type of control put in place and directly impacts the scope, derivations, meanings and, accordingly, implementation of cybersecurity. The authors of this paper argue that the use of a unified terminology fosters additional value and empowers enhanced risk oversight (countermeasures and safeguards). Therefore, as a response to the ensuing confusion, this paper explores the determinants of interchangeable terminology and meanings, and analyses the nature of the theoretical legacy as well as the internal and external organisational determinant factors, with the purpose of identifying and clarifying the cause and effects of the confusion. The paper is mainly based on a qualitative analysis.
Online Information Seeking: A Review of the Literature in the Health Domain
The development of information technology and the internet has been transforming the healthcare industry. The internet is continuously accessed to seek health information, and there is a variety of sources, including search engines, health websites, and social networking sites. Providing more and better information on health may empower individuals; however, ensuring high-quality, trusted health information poses a challenge. Moreover, there is an ever-increasing amount of information available, but it is not necessarily accurate or up to date. Thus, this paper aims to provide an insight into the models and frameworks related to consumers' online health information seeking. It begins by exploring the definitions of information behavior and information seeking to provide a better understanding of the concept. In this study, critical factors such as performance expectancy, effort expectancy, and social influence will be studied in relation to the value of seeking health information. It also aims to analyze the effect of age, gender, and health status as moderators of the factors that influence online health information seeking, i.e., trust and information quality. A preliminary survey will be carried out among health professionals to clarify the research problems that exist in the real world and, at the same time, produce a conceptual framework. A final survey will be distributed in five states of Malaysia to solicit feedback on the framework. Data will be analyzed using the SPSS and SmartPLS 3.0 analysis tools. It is hoped that, at the end of this study, a novel framework that can improve online health information seeking will be developed. Finally, this paper concludes with some suggestions on models and frameworks that could improve online health information seeking.
Bag of Local Features for Person Re-Identification on Large-Scale Datasets
In the last few years, large-scale person re-identification has attracted much attention in video surveillance due to its potential applications in public safety management. However, it remains a challenging task considering variation in human pose, changing illumination conditions, and the lack of paired samples. Although accuracy has been significantly improved, training remains heavily dependent on large amounts of sample data. To tackle this problem, a new strategy is proposed for designing the feature representation based on the bag-of-visual-words (BoVW) model, which has been widely used in the field of image retrieval. Local features are extracted, and a more discriminative feature representation is obtained by cross-view dictionary learning (CDL); the assignment map is then obtained through k-means clustering. Finally, the BoVW histograms are formed, encoding each image with the statistics of the feature classes in its assignment map. Experiments conducted on the CUHK03, Market1501, and MARS datasets show that the proposed method performs favorably against existing approaches.
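The assignment-map and histogram steps above can be sketched as follows. This is a minimal sketch in which the abstract's cross-view dictionary learning step is replaced by a fixed random codebook, and the feature dimensions are hypothetical:

```python
import numpy as np

def bovw_histogram(local_features, codebook):
    """Assign each local feature to its nearest codeword and return the
    normalized bag-of-visual-words histogram for one image."""
    # Pairwise distances between features and codewords.
    d = np.linalg.norm(local_features[:, None, :] - codebook[None, :, :], axis=2)
    assignments = d.argmin(axis=1)                  # the assignment map
    hist = np.bincount(assignments, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(2)
codebook = rng.normal(size=(8, 16))   # 8 visual words, 16-D local features
feats = rng.normal(size=(100, 16))    # 100 local features from one image
h = bovw_histogram(feats, codebook)
```

Re-identification then compares the histograms of gallery and probe images instead of the raw local features.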
Multiple Images Stitching Based on Gradually Changing Matrix
Image stitching is a very important branch of computer vision, especially for panoramic maps. To eliminate shape distortion, a novel stitching method based on a gradually changing matrix is proposed for horizontally captured images. For such images, this paper assumes that image stitching involves only translation. By analyzing each parameter of the homography matrix, the global homography matrix is gradually transformed into a translation matrix so as to eliminate the effects of scaling, rotation, and other components of the image transformation. This paper adopts matrix approximation to minimize an energy function so that the shape distortion in the regions governed by the homography can be minimized. The proposed method avoids the failure of multiple horizontal image stitching caused by accumulated shape distortion. At the same time, it can be combined with the As-Projective-As-Possible algorithm to ensure precise alignment of the overlapping area.
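The idea of gradually transforming a homography into a translation matrix can be sketched as follows. The linear blend and the example matrix are illustrative assumptions; the paper's actual per-parameter schedule, driven by its energy function, may differ:

```python
import numpy as np

def gradual_matrix(H, t):
    """Blend a global homography H toward the pure-translation matrix that
    keeps only H's translation terms. t = 0 returns the (normalized)
    homography, t = 1 the translation-only matrix; intermediate t values
    realize a 'gradually changing matrix' across the panorama."""
    H = H / H[2, 2]                        # normalize the homography
    T = np.eye(3)
    T[0, 2], T[1, 2] = H[0, 2], H[1, 2]    # keep only the translation terms
    return (1.0 - t) * H + t * T

# A hypothetical global homography with small scale/rotation/perspective terms.
H = np.array([[1.1,  0.05, 30.0],
              [0.02, 0.95,  5.0],
              [1e-4, 2e-4,  1.0]])
```

Warping successive images with matrices of increasing t suppresses the scaling, rotation, and perspective components away from the seam, which is what prevents the accumulated shape distortion.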
Long-Term Tracking Algorithm with Selected Deep Features and Single Shot MultiBox Detector
In recent years, correlation-filter-based algorithms have achieved significant performance in tracking. Traditionally, the previous frame is used to train the filter in order to predict the target position in the next frame, and features are then extracted from the current target. However, we find that if drift occurs in the current frame during tracking, the succeeding frame is subjected to larger offset errors, which may eventually lead to loss of the tracking target and reduce accuracy and stability. To enforce high accuracy of the tracking results, we incorporate deep learning into our tracking model. First, we choose deep features as the features for the correlation filter. Considering that the dimension of the deep features is too high, we use a sparse representation method to select them, which improves both the accuracy and the running speed of the algorithm. We use a Siamese network to judge the similarity between the target position and the template and determine a confidence value from this similarity. The Single Shot MultiBox Detector (SSD) then detects on the current frame, once the confidence value crosses a threshold, to update the tracking model. In this way, once drift occurs, the detection algorithm re-initializes tracking to obtain better stability and accuracy. The experimental results demonstrate that the proposed approach outperforms state-of-the-art approaches on large-scale benchmark datasets.
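The confidence-gated re-detection loop described above can be sketched as follows. All four callables are stand-ins for the components named in the abstract (correlation-filter tracker, Siamese similarity, SSD detector), and the toy demonstration at the end is purely illustrative:

```python
def track_with_redetection(frames, track_step, similarity, detect, threshold=0.5):
    """Follow the tracker's prediction while the Siamese similarity to the
    template stays high; hand the frame to the detector when it drops."""
    positions = []
    pos = None
    for frame in frames:
        pos = track_step(frame, pos)
        if similarity(frame, pos) < threshold:   # drift suspected
            pos = detect(frame)                  # detector re-initializes tracking
        positions.append(pos)
    return positions

# Toy demonstration: the "tracker" drifts right by 2 each frame, similarity
# collapses once the estimate strays past 5, and the "detector" snaps the
# estimate back to the true position 0.
positions = track_with_redetection(
    frames=range(5),
    track_step=lambda frame, pos: (pos or 0) + 2,
    similarity=lambda frame, pos: 1.0 if pos < 5 else 0.0,
    detect=lambda frame: 0,
)
```

The design point is that the expensive detector runs only when the cheap similarity check signals drift, keeping the loop fast in the common case.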
Jordan Water District Interactive Billing and Accounting Information System
The Jordan Water District Interactive Billing and Accounting Information System is designed for the Jordan Water District to improve the efficiency and effectiveness of its services to its customers. It is designed to compute water bills accurately and quickly by automating the manual process and ensuring that the correct rates and fees are applied. In addition to the billing process, a mobile app will be integrated to support rapid and accurate water bill generation. An interactive feature will be incorporated to support electronic billing for customers who wish to receive water bills by electronic mail. The system will also improve and organize accounting processes and avoid data inaccuracy, because data will be stored in a database designed to be logically correct through normalization. Furthermore, strict programming constraints will be enforced to validate account access privileges based on job function and on the data being stored and retrieved, ensuring data security, reliability, and accuracy. The system will be able to cater to the billing and accounting services of the Jordan Water District, setting aside the manual process and adapting to modern technological innovations.
Analyzing the Performance of Data Partitioning in Real-Time Spatial Big Data: The Implementation of the Matching Algorithm for Vertical Partitioning
In recent years, real-time spatial applications, like location-aware services and traffic monitoring, have become more and more important. Such applications create dynamic environments in which both the data and the queries are continuously moving. As a result, a tremendous amount of real-time spatial data is generated every day, and the growth of the data volume seems to outpace the advance of our computing infrastructure. For instance, in real-time spatial Big Data, users expect to receive the results of each query within a short time, regardless of the load on the system; but with a huge amount of real-time spatial data generated, system performance degrades rapidly, especially in overload situations. To solve this problem, we propose data partitioning as an optimization technique. Traditional horizontal and vertical partitioning can increase system performance and simplify data management, but they remain insufficient for real-time spatial Big Data because they cannot deal with real-time and stream queries efficiently. Thus, in this paper, we propose a novel data partitioning approach for real-time spatial Big Data named VPA-RTSBD (Vertical Partitioning Approach for Real-Time Spatial Big Data). This contribution is an implementation of the Matching algorithm for traditional vertical partitioning. First, we find the optimal attribute sequence using the Matching algorithm. Then, we propose a new cost model for database partitioning that keeps the data volume of each partition balanced and provides parallel execution guarantees for the most frequent queries. VPA-RTSBD aims to obtain a real-time partitioning scheme that deals with stream data. It improves the performance of query execution by maximizing the degree of parallelism, which improves QoS (Quality of Service) in real-time spatial Big Data, especially with a huge volume of stream data.
The performance of our contribution is evaluated via simulation experiments. The results show that the proposed algorithm is both efficient and scalable, and that it outperforms comparable algorithms.
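As an illustration of the vertical partitioning idea underlying this work, the following minimal sketch groups attributes by how often queries access them together. The workload, attribute names, affinity measure, and greedy grouping rule are all illustrative assumptions, not the paper's Matching algorithm or cost model.

```python
# Hypothetical sketch: vertical partitioning driven by an attribute usage workload.
# Query workload: which attributes each query touches (names are illustrative).
queries = [
    {"attrs": {"id", "lat", "lon"}, "freq": 50},
    {"attrs": {"id", "speed"}, "freq": 30},
    {"attrs": {"lat", "lon", "timestamp"}, "freq": 20},
]
attributes = ["id", "lat", "lon", "speed", "timestamp"]

def affinity(a, b):
    """How often attributes a and b are accessed together, weighted by frequency."""
    return sum(q["freq"] for q in queries if a in q["attrs"] and b in q["attrs"])

def partition(threshold):
    """Greedy grouping: an attribute joins a fragment only if its affinity with
    every attribute already in that fragment reaches the threshold."""
    fragments = []
    for a in attributes:
        for frag in fragments:
            if all(affinity(a, b) >= threshold for b in frag):
                frag.append(a)
                break
        else:
            fragments.append([a])
    return fragments

print(partition(20))  # [['id', 'lat', 'lon'], ['speed'], ['timestamp']]
```

Each resulting fragment can then be stored and scanned independently, so the most frequent queries touch only the columns they need.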
Inner Attention Based Bi-Long-Short Term Memories with Indexing for Non-Factoid Question Answering
The paper focuses on solving the problem of non-factoid question answering, an important task with applications in knowledge base construction and information extraction. We try to overcome the challenges of non-factoid question answering using a combination of different deep learning models. The combination of LSTMs with other deep learning models has proved very useful for answering non-factoid questions. In this paper, we extend a deep learning model based on bidirectional LSTMs in two directions with different neural network models. In one direction, we combine a convolutional neural network with the basic LSTM for a more composite question-answer embedding; in the other, we apply an inner attention mechanism, proposed by Bingning Wang et al., to the LSTMs. We also use an information retrieval model along with these models to generate answers. Our approach shows an improvement in accuracy over the baselines and the referred model, both in general and with respect to the length of the answers in the datasets used.
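The attention step described above can be sketched in a few lines: each answer hidden state is re-weighted by its similarity to the question representation before pooling. Dimensions, the bilinear scoring form, and the random weights are illustrative assumptions, not the exact formulation of the cited model.

```python
import numpy as np

# Hypothetical sketch of an attention step over answer hidden states: each h_t is
# scored against the question representation r_q, and the scores weight the pooling.
rng = np.random.default_rng(0)
T, d = 5, 4                      # answer length, hidden size (toy values)
H = rng.standard_normal((T, d))  # answer hidden states (stand-ins for bi-LSTM output)
r_q = rng.standard_normal(d)     # question representation
M = rng.standard_normal((d, d))  # attention matrix (learned in practice, random here)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

scores = H @ M @ r_q             # bilinear score h_t^T M r_q for each time step
alpha = softmax(scores)          # normalized attention weights
answer_repr = alpha @ H          # attention-weighted pooling over time

print(answer_repr.shape)         # (4,)
```

The pooled vector can then be matched against the question embedding, e.g. by cosine similarity, to rank candidate answers.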
Fuzzy Logic Based Intrusion Detection Systems as a Service for Malicious Port Scanning Traffic Detection
Port scanning is a network attack that allows cyber terrorists to gather valuable information about target hosts, notably defense, governmental, and bank servers, by instantly identifying open ports that correspond to specific services on the cloud, such as HTTP, DNS, and email. The basic role of Intrusion Detection Systems (IDSs) is to monitor networks and systems for malicious activities, policy violations, and unauthorized information gathering. In this paper, we propose a TCP port scanning detection framework based on a fuzzy logic controller, which uses a fuzzy rule base and the Mamdani inference method. The proposed platform is a Fuzzy IDS as a Service, which enables network administrators and cybersecurity specialists to follow the network traffic behavior in real time through the Port Scanning Criticality Level (PSCL). A SaaS dynamic dashboard is implemented to quickly and efficiently identify malicious port scanning activities. Experiments and evaluations showed the efficiency of the proposed system in multilevel port scanning detection compared to Snort and related IDSs.
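A Mamdani-style criticality score like the PSCL can be sketched as follows. The input variable, membership function shapes, and output levels below are illustrative assumptions; the paper's actual rule base is not given in the abstract.

```python
# Hypothetical sketch of Mamdani-style fuzzy inference for a Port Scanning
# Criticality Level. Variables and membership shapes are illustrative only.

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def pscl(ports_per_sec):
    # Fuzzify the scan rate into low / medium / high activity.
    low = tri(ports_per_sec, -1, 0, 20)
    med = tri(ports_per_sec, 10, 40, 70)
    high = tri(ports_per_sec, 50, 100, 151)
    # Rules fire one output level each (0 = benign, 50 = suspicious, 100 = critical);
    # defuzzify by the weighted centroid of the fired levels.
    weights = [low, med, high]
    levels = [0.0, 50.0, 100.0]
    total = sum(weights)
    return sum(w * l for w, l in zip(weights, levels)) / total if total else 0.0

print(pscl(5))    # low scan rate -> benign criticality
print(pscl(90))   # high scan rate -> critical
```

In a deployed IDS, the fuzzified inputs would typically also include features such as the number of distinct destination ports and the SYN/ACK ratio, with one rule per input combination.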
Voice Controlled Robotic Manipulator with Decision Tree and Contour Identification Techniques
Robotic manipulators are widely employed for a number of industrial and commercial purposes today. Most of these manipulators are preprogrammed with specific instructions that they follow precisely. With such forms of control, the adaptability of the manipulators to dynamic environments as well as to new tasks is reduced. To increase the flexibility and adaptability of robotic manipulators, alternative control mechanisms need to be pursued. In this paper, we present one such mechanism: a speech-based control system combined with machine perception. The manipulator is equipped with a vision system that allows it to recognize the required objects in the environment. Voice commands issued by the user as plain English statements are combined with the environment sensed by the robot and converted into precise instructions that the manipulator executes. This combination of speech and vision allows the manipulator to adapt itself to changing tasks as well as environments without any reprogramming.
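The command-interpretation step can be sketched as a shallow decision tree over the recognized words: branch on the verb first, then on the object. The keyword sets and action names below are illustrative assumptions; the paper's actual grammar and decision tree are not specified in the abstract.

```python
# Hypothetical sketch: mapping a plain-English utterance to a manipulator action.
# Keywords and action names are illustrative, not the paper's command set.

ACTIONS = {"pick": "GRASP", "place": "RELEASE", "move": "TRANSLATE"}
OBJECTS = {"cube", "ball", "bottle"}

def parse_command(utterance):
    """Walk the words like a shallow decision tree: find the verb, then the object."""
    words = utterance.lower().split()
    verb = next((ACTIONS[w] for w in words if w in ACTIONS), None)
    obj = next((w for w in words if w in OBJECTS), None)
    if verb is None:
        return None  # unrecognized command; the robot would ask for clarification
    return (verb, obj)

print(parse_command("Please pick up the red cube"))  # ('GRASP', 'cube')
```

The resulting (action, object) pair would then be grounded against the vision system's contour-based object detections before motion planning.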
Improved Multi-Channel Separation Algorithm for Satellite-Based Automatic Identification System Signals Based on Artificial Bee Colony and Adaptive Moment Estimation
The applications of the satellite-based automatic identification system (S-AIS) pave the way for wide-range maritime traffic monitoring and management. However, a satellite's field of view covers multiple AIS self-organizing networks, which leads to collisions between AIS signals from different cells. The contribution of this work is an improved multi-channel blind source separation algorithm based on the Artificial Bee Colony (ABC) algorithm and advanced stochastic optimization to separate the mixed AIS signals. The proposed approach adopts a modified ABC algorithm to obtain an optimized initial separating matrix, which expedites the correction of the initialization bias, and utilizes Adaptive Moment Estimation (Adam) to update the separating matrix by dynamically adjusting the learning rate for each parameter. Simulation results show that the algorithm speeds up convergence and achieves better separation accuracy.
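The Adam update of a separating matrix can be sketched as below. The natural-gradient ICA rule with a tanh nonlinearity is a standard stand-in for the separation criterion; the paper's exact contrast function, and the ABC initialization (replaced here by the identity matrix), are assumptions for this sketch.

```python
import numpy as np

# Hypothetical sketch: refining a separating matrix W with Adam. The gradient
# comes from the standard natural-gradient ICA rule, used here as a stand-in.

def adam_step(W, grad, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: per-parameter learning rates from moment estimates."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad**2
    m_hat = m / (1 - b1**t)          # bias-corrected first moment
    v_hat = v / (1 - b2**t)          # bias-corrected second moment
    W = W - lr * m_hat / (np.sqrt(v_hat) + eps)
    return W, m, v

def ica_gradient(W, X):
    """Descent direction from the natural-gradient rule (I - E[g(y) y^T]) W."""
    Y = W @ X
    g = np.tanh(Y)
    n = X.shape[1]
    return -(np.eye(W.shape[0]) - (g @ Y.T) / n) @ W

rng = np.random.default_rng(1)
X = rng.standard_normal((2, 500))    # toy mixed observations
W = np.eye(2)                        # initial separating matrix (ABC would seed this)
m = np.zeros_like(W)
v = np.zeros_like(W)
for t in range(1, 101):
    W, m, v = adam_step(W, ica_gradient(W, X), m, v, t)
print(W.shape)
```

The point of Adam here is the per-element learning rate: elements of W with noisy gradients get smaller effective steps, which is what stabilizes the separating-matrix updates.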
Joint Code Acquisition and Doppler Shift Estimation Method for Direct Sequence Spread Spectrum-Minimum Shift Keying Signal
Acquisition of a Direct Sequence Spread Spectrum-Minimum Shift Keying (DSSS-MSK) signal in a low signal-to-noise ratio (SNR), highly dynamic environment seriously affects the overall performance of the receiving system. The proposed all-digital IF receiver has a serial structure, transforming the DSSS-MSK signal into an approximate DSSS-BPSK signal using a matched filter designed from the known frequency response via convex optimization. The signals are then regrouped by spreading code period. Finally, by combining Doppler frequency shift compensation with an FFT-based parallel code acquisition algorithm, the PN code phase difference and the Doppler frequency shift are captured simultaneously. Simulation results show that the proposed algorithm achieves 7 dB and 8 dB SNR improvements over the delay-correlation method and the ML-FFT method, respectively. Furthermore, it offers a fast acquisition rate, a wide acquisition range, high acquisition accuracy, and low complexity, and is suitable for low-SNR environments.
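The core of FFT-based parallel code acquisition is that one FFT/IFFT pair evaluates the correlation at every code phase at once. The sketch below shows only that step on a toy PN sequence; the Doppler compensation loop and the MSK-to-BPSK matched filtering described above are omitted, and the code length and noise level are illustrative.

```python
import numpy as np

# Hypothetical sketch of FFT-based parallel code phase acquisition: the circular
# correlation of the received samples with the local PN code is computed for all
# phases at once, and the peak location gives the code phase.

rng = np.random.default_rng(2)
N = 1023
pn = rng.choice([-1.0, 1.0], size=N)   # illustrative PN code (random, not a real LFSR)
true_phase = 317
received = np.roll(pn, true_phase) + 0.5 * rng.standard_normal(N)  # delayed + noise

# Circular correlation via FFT: corr = IFFT( FFT(r) * conj(FFT(pn)) ).
corr = np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(pn)))
est_phase = int(np.argmax(np.abs(corr)))
print(est_phase)  # 317
```

This replaces N serial correlations with two FFTs and one IFFT, which is what makes searching the full code phase range feasible at each Doppler bin.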
Green Computing: Awareness and Practice in a University Information Technology Department
The pervasiveness of ICTs in today's society paradoxically also calls for green computing. Green computing generally encompasses the study and practice of using Information and Communication Technology (ICT) resources effectively and efficiently without negatively affecting the environment. Since the emergence of this innovation, manufacturers and bodies such as the Energy Star program and the United States government have invested many resources in ensuring the green design, manufacture, and disposal of ICTs. However, the level of adherence to the green use of ICTs among users has been less accounted for, especially in developing ICT-consuming nations. This paper therefore examines the awareness and practice of green computing among academics and students of the Information Technology Department of the Durban University of Technology, Durban, South Africa, in the context of the green use of ICTs. This was achieved through a survey using a questionnaire with four sections: (a) demography of respondents, (b) awareness of green computing, (c) practices of green computing, and (d) attitude towards greener computing. One hundred and fifty (150) questionnaires were distributed; one hundred and twenty-five (125) were completed and collected for data analysis. Of the one hundred and twenty-five (125) respondents, twenty-five percent (25%) were academics, while the remaining seventy-five percent (75%) were students. The results showed a higher level of awareness of green computing among academics than among students. Green computing practices were also found to be highly adhered to among academics only. Interestingly, however, the students were found to be more enthusiastic about greener computing in the future.
The study therefore suggests that awareness of green computing should be further strengthened among students from the curriculum point of view in order to improve the green use of ICTs in universities, especially in developing countries.
Automatic Registration of Rail Profile Based Local Maximum Curvature Entropy
To reduce the influence of train vibration and environmental noise on the measurement of track wear, we propose a method for the automatic extraction of the circular arcs on the inner and outer sides of the rail waist and achieve high-precision registration of the rail profile. First, a polynomial fitting method based on a truncated residual histogram is proposed to find the optimal fitting curve of the profile and reduce the influence of noise on profile curve fitting. Then, based on the curvature distribution characteristics of the fitting curve, an interval search algorithm based on the maximum curvature entropy of a dynamic window is proposed to realize the automatic segmentation of the small circular arcs. Finally, we fit the two circle centers as matching reference points based on the small circular arcs on both sides and realize the alignment from the measured profile to the standard designed profile. Static experimental results show that the mean and standard deviation of the method are controlled within 0.01 mm, with small measurement errors and high repeatability. A dynamic test also verified the repeatability of the method in the train-running environment; the dynamic measurement deviation of rail wear is within 0.2 mm with high repeatability.
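The first two steps above rest on fitting a polynomial to the noisy profile and evaluating its curvature. The sketch below shows that pipeline on a synthetic parabolic profile; the data, polynomial degree, and noise level are illustrative assumptions, and the truncated-residual-histogram refinement and entropy-based segmentation are not reproduced here.

```python
import numpy as np

# Hypothetical sketch: fit a polynomial to noisy profile points and evaluate the
# signed curvature kappa = y'' / (1 + y'^2)^(3/2), the quantity whose distribution
# the curvature-entropy segmentation relies on. The data below are synthetic.

x = np.linspace(-1.0, 1.0, 200)
y_noisy = 0.5 * x**2 + 0.002 * np.random.default_rng(3).standard_normal(200)

coeffs = np.polyfit(x, y_noisy, 4)   # polynomial fitting of the measured profile
p = np.poly1d(coeffs)
dp, ddp = p.deriv(1), p.deriv(2)

def curvature(xv):
    return ddp(xv) / (1.0 + dp(xv) ** 2) ** 1.5

# For the noiseless profile y = 0.5 x^2, the curvature at x = 0 is exactly 1.
print(curvature(0.0))
```

On the fitted curve, arc segments show up as intervals of nearly constant curvature, which is what the dynamic-window entropy search exploits to isolate the small circular arcs.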
Off-Grid Sparse Inverse Synthetic Aperture Imaging by Basis Shift Algorithm
In this paper, a new and robust algorithm is proposed to achieve high resolution for inverse synthetic aperture radar (ISAR) imaging in the compressive sensing (CS) framework. Traditional CS-based methods have to assume that the unknown scatterers lie exactly on the pre-divided grid; otherwise, their reconstruction performance drops significantly. In the proposed algorithm, several basis shifts are utilized to achieve the same effect as grid refinement. The detailed implementation of the basis shift algorithm is presented in this paper. Simulation results show that the basis shift algorithm improves imaging precision and confirm the effectiveness and feasibility of the proposed method.
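The basis-shift idea can be illustrated on a one-dimensional toy problem: rather than refining the grid, the Fourier dictionary is shifted by a few fractional offsets and the shift whose atom best matches the off-grid component is kept. The signal model, shift set, and matching rule below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

# Hypothetical 1-D sketch of the basis-shift idea: a scatterer at an off-grid
# frequency is matched against several fractionally shifted Fourier dictionaries.

N = 64
n = np.arange(N)
true_freq = 10.37                      # off-grid component (between bins 10 and 11)
signal = np.exp(2j * np.pi * true_freq * n / N)

def best_match(shift):
    """Peak correlation of the signal with the dictionary shifted by `shift` bins."""
    atoms = np.exp(2j * np.pi * np.outer(np.arange(N) + shift, n) / N)
    corr = np.abs(atoms.conj() @ signal) / N
    k = int(np.argmax(corr))
    return corr[k], k + shift

shifts = np.arange(0.0, 1.0, 0.25)     # a few basis shifts in lieu of grid refinement
score, freq_est = max(best_match(s) for s in shifts)
print(freq_est)                        # 10.25, the closest shifted atom
```

With only four shifted dictionaries, the frequency error drops from up to half a bin (on-grid matching) to at most one eighth of a bin, which mirrors how basis shifts substitute for grid refinement in the 2-D ISAR case.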
Development of a Forecasting System and Reliable Sensors for River Bed Degradation and Bridge Pier Scouring
In recent years, climate change has been a major factor in increasing rainfall intensity and the frequency of extreme rainfall. These increases raise the probability of flash floods with abundant sediment transport in a river basin. Floods caused by heavy rainfall may damage bridges, embankments, and hydraulic works and cause other disasters. The scouring of the foundations of bridge piers, embankments, and spur dikes by floods has therefore been a severe problem worldwide. This problem has affected many East Asian countries, such as Taiwan and Japan, because these areas suffer typhoons, earthquakes, and flood events every year. Because river morphology results from the complex interaction between the flow patterns caused by hydraulic works and sediment transport, it is extremely difficult to develop a reliable and durable sensor to measure river bed degradation and bridge pier scouring. Therefore, an innovative scour monitoring sensor using a vibration-based Micro-Electro-Mechanical System (MEMS) was developed. The vibration-based MEMS sensor was packaged inside a stainless sphere, properly protected by fully filled resin, and measures free vibration signals to detect scouring and deposition processes at the bridge pier. In addition, a user-friendly operational system is developed in this research, comprising a rainfall-runoff model, one-dimensional and two-dimensional numerical models, and applicable sediment transport equations and local scour formulas for bridge piers. The operational system produces simulation results for flood events, including the elevation changes of river bed erosion near the specified bridge pier and the erosion depth around bridge piers.
In addition, the system is developed with an easy-to-operate, integrated interface through which users can calibrate and verify the numerical models and display the simulation results alongside the measurements from the scour monitoring sensors. To forecast the erosion depth of the river bed and at the main bridge pier in the study area, the system also connects to the rainfall forecast data of the Taiwan Typhoon and Flood Research Institute. The results can provide the river and bridge engineering management units with useful information in advance.
Hierarchical Tree Long Short-Term Memory for Sentence Representations
A fixed-length feature vector is required by many machine learning algorithms in the NLP field. Word embeddings have been very successful at learning lexical information. However, they cannot capture the compositional meaning of sentences, which prevents a deeper understanding of language. In this paper, we introduce a novel hierarchical tree long short-term memory (HTLSTM) model that learns vector representations for sentences of arbitrary syntactic type and length. We propose to split a sentence into three hierarchies: the short phrase, long phrase, and full sentence levels. The HTLSTM model gives our algorithm the potential to fully consider the hierarchical information and long-term dependencies of language. We design experiments on both English and Chinese corpora to evaluate our model on the sentiment analysis task. The results show that our model significantly outperforms several existing state-of-the-art approaches.
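The building block composed at each level of such a hierarchy is the standard LSTM cell, sketched below on a toy "phrase". The dimensions and random weights are illustrative assumptions; the HTLSTM's tree-structured composition across the three levels is not reproduced here.

```python
import numpy as np

# Hypothetical sketch of a single LSTM cell step, the unit composed at the phrase
# and sentence levels of a hierarchical model. Weights are random, dims tiny.

rng = np.random.default_rng(4)
d, x_dim = 3, 3
W = rng.standard_normal((4 * d, x_dim + d)) * 0.1   # gate weights stacked: i, f, o, g
b = np.zeros(4 * d)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev):
    z = W @ np.concatenate([x, h_prev]) + b
    i, f, o, g = np.split(z, 4)
    c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)   # gated cell-state update
    h = sigmoid(o) * np.tanh(c)                          # new hidden state
    return h, c

h = c = np.zeros(d)
for x in rng.standard_normal((5, x_dim)):   # a 5-word "phrase" of random embeddings
    h, c = lstm_step(x, h, c)
print(h.shape)
```

The final hidden state h serves as the fixed-length phrase representation; in a tree-structured model, it becomes the input embedding of the parent node at the next level.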
Text Localization in Fixed-Layout Documents Using Convolutional Networks in a Coarse-to-Fine Manner
Text contained within fixed-layout documents such as ID cards, invoices, cheques, and passports can be of great semantic value and therefore requires high localization accuracy. Recently, algorithms based on deep convolutional networks have achieved high performance on text detection tasks. However, for text localization in fixed-layout documents, such algorithms detect word bounding boxes individually and thereby ignore the layout information. This paper presents a novel architecture built on convolutional neural networks (CNNs). A global text localization network and a regional bounding-box regression network are introduced to tackle the problem in a coarse-to-fine manner. The text localization network simultaneously locates word bounding points, taking the layout information into account. The bounding-box regression network takes features pooled from arbitrarily sized RoIs as input and refines the localizations. The two networks share their convolutional features and are trained jointly. ID cards, a typical type of fixed-layout document, are selected to evaluate the effectiveness of the proposed system. The networks are trained on data cropped from natural scene images and on synthetic data produced by a synthetic text generation engine. Experiments show that our approach locates word bounding boxes with high accuracy and achieves state-of-the-art performance.
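The refinement step of such a coarse-to-fine pipeline typically predicts offsets in the R-CNN-style bounding-box parameterization, sketched below. The paper's exact regression targets are not given in the abstract, so this standard encoding and the example boxes are assumptions.

```python
import math

# Hypothetical sketch of the R-CNN-style bounding-box regression encoding used to
# refine a coarse RoI onto a word box. Boxes are (cx, cy, w, h).

def encode(roi, gt):
    """Regression targets that map a coarse RoI onto the ground-truth box."""
    tx = (gt[0] - roi[0]) / roi[2]
    ty = (gt[1] - roi[1]) / roi[3]
    tw = math.log(gt[2] / roi[2])
    th = math.log(gt[3] / roi[3])
    return (tx, ty, tw, th)

def decode(roi, t):
    """Apply predicted offsets to refine the coarse localization."""
    cx = roi[0] + t[0] * roi[2]
    cy = roi[1] + t[1] * roi[3]
    w = roi[2] * math.exp(t[2])
    h = roi[3] * math.exp(t[3])
    return (cx, cy, w, h)

roi = (50.0, 40.0, 30.0, 10.0)   # coarse word box from the global network
gt = (54.0, 42.0, 36.0, 12.0)    # ground-truth word box
t = encode(roi, gt)
print(decode(roi, t))            # decoding the exact targets recovers the ground truth
```

Normalizing the offsets by the RoI size and using log-scale width/height targets keeps the regression scale-invariant, which matters when word boxes on a card vary widely in size.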