International Science Index

International Journal of Mathematical and Computational Sciences

Survey of Methods for Solutions of Spatial Covariance Structures and Their Limitations
In modelling environmental processes, we apply multidisciplinary knowledge to explain, explore and predict the Earth's response to natural and human-induced environmental changes. In the analysis of spatio-temporal ecological and environmental studies, the spatial parameters of interest are often heterogeneous, which negates the assumption of stationarity. Hence, modelling the dispersion and transportation of atmospheric pollutants, landscape or topographic effects, and weather patterns depends on a good estimate of the spatial covariance. The generalized linear mixed model, although linear in the expected-value parameters, has a likelihood that varies nonlinearly as a function of the covariance parameters. As a consequence, computing estimates for a linear mixed model requires the iterative solution of a system of simultaneous nonlinear equations. In order to predict the variables at unsampled locations, we need estimates of the presently sampled variables. Geostatistical methods for solving this spatial problem assume that the covariance is stationary (locally defined) and uniform in space, which is rarely valid because spatial processes often exhibit nonstationary covariance and hence require a globally defined covariance. We consider different existing methods for estimating the spatial covariance of a space-time process at unsampled locations, where the covariance changes with location over multiple time sets and possesses certain asymptotic properties.
All Digits Number Benford Law in Financial Statement
Background: This research explores whether there is fraud in a financial statement, using Benford's law, which states that the distribution of digits in naturally occurring numbers follows a predictable pattern favouring lower digits. Research methods: The research applies an all-digits analysis under Benford's law. After obtaining the results of the all-digits analysis, the author distinguishes between items whose rates of occurrence differ from the Benford distribution by more than and by less than 5%. For the numbers with differences beyond the 5% range, a follow-up can be carried out to detect the onset of fraud in the financial statements. The findings: From the research conducted, it can be concluded that the digits appearing in the financial statements occur at rates consistent with the characteristics of Benford's law; no indication of errors or fraud in the financial statements of PT Medco Energy Tbk was found. Conclusions: The study concludes that Benford's law can serve as an indicator tool for detecting possible fraud in financial statements, as shown in the case study of PT Medco Energy Tbk for the fiscal years 2000-2010.
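The digit-comparison procedure described above can be sketched in code. The following is a minimal first-digit illustration (the study itself analyses all digit positions) using the 5% deviation threshold from the abstract; the function names and sample data are hypothetical.

```python
import math
from collections import Counter

def benford_expected(digit):
    """Expected leading-digit probability under Benford's law: log10(1 + 1/d)."""
    return math.log10(1 + 1 / digit)

def flag_deviations(amounts, threshold=0.05):
    """Compare observed leading-digit frequencies with Benford's law.
    Returns the digits whose absolute deviation exceeds the threshold."""
    leading = [int(str(abs(a)).lstrip("0.")[0]) for a in amounts if a]
    if not leading:
        return {}
    counts = Counter(leading)
    n = len(leading)
    flagged = {}
    for d in range(1, 10):
        observed = counts.get(d, 0) / n
        diff = abs(observed - benford_expected(d))
        if diff > threshold:
            flagged[d] = round(diff, 4)
    return flagged
```

Items flagged by such a comparison would then be candidates for the manual follow-up the abstract describes.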
Asset Liability Modelling for Pension Funds by Introducing Leslie Model for Population Dynamics: Evidence from Lithuania
The paper investigates the current demographic trends that strain the sustainability of pension systems in most EU regions. The demographic challenge is usually composed of several drivers arising from the structure and trends of the population in a country. As the research case, three main variables of demographic risk in Lithuania have been singled out and used in the analysis. Over the last two decades, the country has presented a peculiar demographic situation characterized by pessimistic fertility trends, a negative net migration rate and rising life expectancy, which cause significant changes in the working-age population. This study therefore sets out to assess the relative impact of these risk factors both individually and in aggregate, while assuming economic trends to evolve as they have historically. The evidence is presented using data from pension funds that operate in Lithuania and are financed by defined-contribution plans. To achieve this goal, a discrete-time model of the pension fund's value is developed that reflects the main operational modalities: contribution income from current participants and new entrants, pension disbursement and administrative expenses; the value also fluctuates with returns from investment activity. An age-structured Leslie population dynamics model has been integrated into the main model to describe the dependence of fertility, migration and mortality rates on age. Validation has concluded that the Leslie model adequately fits the current population trends in Lithuania. The elasticity of the pension system is examined using Loimaranta efficiency as a measure for comparing plausible long-term developments of the demographic risks. With respect to the research question, it was found that the demographic risks have different levels of influence on the future value of aggregated pension funds: fertility rates have the highest importance, while mortality rates have only a minor impact.
Further studies exploring different economic scenarios in the integrated model would be worthwhile.
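The age-structured projection at the heart of the Leslie model can be illustrated in a few lines. The rates below are purely hypothetical placeholders; the study fits them to Lithuanian fertility, mortality and migration data.

```python
import numpy as np

def leslie_matrix(fertility, survival):
    """Build a Leslie matrix: first row holds age-specific fertility rates,
    the sub-diagonal holds survival probabilities between age classes."""
    n = len(fertility)
    L = np.zeros((n, n))
    L[0, :] = fertility
    L[np.arange(1, n), np.arange(n - 1)] = survival
    return L

# Hypothetical three-age-class population (illustrative rates only)
L = leslie_matrix([0.0, 1.2, 0.8], [0.9, 0.7])
pop = np.array([100.0, 60.0, 40.0])
for _ in range(10):                  # project ten periods ahead
    pop = L @ pop
# The dominant eigenvalue gives the long-run population growth rate
growth = np.max(np.abs(np.linalg.eigvals(L)))
```

In the integrated model, the projected age structure drives contribution income and pension disbursement period by period.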
Design of Control Chart Using Resubmitted Sampling
This article develops a new variable control chart based on resubmission of the sample, for conditions where resampling is permitted and accepted in place of the original sampling scheme. The equations for the proposed chart, along with the corresponding critical values of the control constraint, are derived using a linear optimization procedure for normally distributed data, so that decisions about non-conforming items can be made in a more accurate and reliable way. Moreover, the efficiency of the proposed variable control chart using resubmitted sampling is evaluated and compared with an existing variable chart based on a single sampling scheme. For illustrative purposes, a real-life example is presented to explain the implementation of the derived results in detail.
Estimation of Missing Values in Aggregate Level Spatial Data
Missing data is a common problem in spatial analysis, especially at the aggregate level. Missingness can occur in the covariates, in the response variable, or in both at a given location. Many missing-data techniques are available to estimate missing values, but not all of them can be applied to spatial data, since such data are autocorrelated. Hence there is a need for a method that estimates the missing values of both the response variable and the covariates in spatial data while taking the spatial autocorrelation into account. The present study aims to develop a model to estimate missing data points at the aggregate level in spatial data by accounting for (a) spatial autocorrelation of the response variable, (b) spatial autocorrelation of the covariates, and (c) correlation between the covariates and the response variable. Estimating the missing values of spatial data requires a model that explicitly accounts for the spatial autocorrelation. The proposed model not only accounts for spatial autocorrelation but also utilizes the correlations that exist between covariates, within covariates, and between the response variable and the covariates. Precise estimation of the missing data points in spatial data will increase the precision of the estimated effects of the independent variables on the response variable in spatial regression analysis.
Spatially Distributed Rainfall Prediction Based on Automated Kriging for Landslide Early Warning Systems
The precise prediction of rainfall in space and time is a key element of most landslide early warning systems. Unfortunately, the spatial variability of rainfall is often disregarded in early warning applications. A common simplification is to use uniformly distributed rainfall to characterize areal rainfall intensity. With spatially differentiated rainfall information, real-time comparison with rainfall thresholds or the implementation in process-based approaches might form the basis for improved landslide warnings. This study suggests an automated workflow from the hourly, web-based collection of rain gauge data to the generation of spatially differentiated rainfall predictions based on kriging. Because the application of kriging is usually a labor-intensive task, a simplified and consequently automatable variogram modeling procedure was applied to up-to-date rainfall data. The entire workflow was implemented purely with open-source technology. Validation results, albeit promising, pointed out the challenges involved in purely distance-based, automated geostatistical interpolation of ever-changing environmental phenomena over short temporal and spatial extents.
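As a rough sketch of the interpolation step, ordinary kriging with a fixed exponential variogram can be written as below. In the automated workflow the variogram parameters would instead be refit to the empirical variogram of each hourly data batch; all names and parameter values here are illustrative assumptions.

```python
import numpy as np

def exp_variogram(h, nugget, sill, rng):
    """Exponential variogram model gamma(h) with practical range rng."""
    return nugget + (sill - nugget) * (1 - np.exp(-3 * h / rng))

def ordinary_kriging(xy, z, x0, nugget=0.0, sill=1.0, rng=50.0):
    """Ordinary kriging prediction at point x0 from gauge coordinates xy
    and observed values z, solving the kriging system in variogram form
    (the extra row/column enforces that the weights sum to one)."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_variogram(d, nugget, sill, rng)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = exp_variogram(np.linalg.norm(xy - x0, axis=1), nugget, sill, rng)
    w = np.linalg.solve(A, b)
    return float(w[:n] @ z)
```

With a zero nugget the predictor is an exact interpolator at the gauge locations, which is one way to sanity-check an automated variogram fit.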
Privacy Protection in Optional Unrelated Question RRT Models
Sihm et al. (2014) introduced a modified unrelated question optional RRT model for both binary and quantitative response situations, wherein the prevalence of the sensitive variable and the sensitivity level of the underlying sensitive question can be estimated simultaneously without using a split-sample approach. In this study, we propose a three-stage optional unrelated question RRT model for both binary and quantitative response situations, which combines the essence of the Sihm et al. (2014) model and the three-stage optional additive RRT model proposed by Mehta et al. (2012). The efficiencies of the Sihm et al. (2014) model and the proposed three-stage optional unrelated question RRT model are compared using simulations. The privacy measures of the two models are also discussed: comparisons for the binary models are based on the Lanke (1976) measure, while the Yan et al. (2009) measure is used to compare the privacy of the quantitative models.
Complexity of Algorithms of New Methods to Find Highest Common Factor and Least Common Multiple
Different approaches to finding the Highest Common Factor and Least Common Multiple of a list of numbers are introduced. The methods themselves are step-by-step procedures in the form of algorithms. The new algorithms lead to the first computer programs that compute these quantities for a list of numbers. The complexity of these algorithms is discussed.
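For comparison, the classical Euclidean approach (not the paper's new methods) extends from a pair of numbers to a whole list by folding the pairwise operations:

```python
from functools import reduce

def hcf(a, b):
    """Euclid's algorithm for the highest common factor of two numbers."""
    while b:
        a, b = b, a % b
    return a

def lcm(a, b):
    """Least common multiple via the identity hcf(a, b) * lcm(a, b) = a * b."""
    return a * b // hcf(a, b)

def hcf_list(nums):
    """HCF of a list: fold the pairwise HCF over all elements."""
    return reduce(hcf, nums)

def lcm_list(nums):
    """LCM of a list: fold the pairwise LCM over all elements."""
    return reduce(lcm, nums)
```

Any new method for a list of numbers can be checked against this baseline on small inputs.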
Design and Analysis of Adaptive Type-I Progressive Hybrid Censoring Plan under Step Stress Partially Accelerated Life Testing Using Competing Risk
Statistical distributions have long been employed in the assessment of semiconductor devices and product reliability. The power function distribution is one of the most important distributions in modern reliability practice and is frequently preferred over mathematically more complex distributions, such as the Weibull and the lognormal, because of its simplicity. Moreover, it may exhibit a better fit for failure data and provide more appropriate information about reliability and hazard rates in some circumstances. This study deals with estimating information about failure times of items under step-stress partially accelerated life tests for competing risks based on an adaptive type-I progressive hybrid censoring criterion. The lifetimes of the units under test are assumed to follow the Mukherjee-Islam distribution. Point and interval maximum-likelihood estimates are obtained for the distribution parameters and the tampering coefficient. The performance of the resulting estimators of the developed model parameters is evaluated and investigated by means of a simulation algorithm.
Efficiency of Robust Heuristic Gradient Based Enumerative and Tunneling Algorithms for Constrained Integer Programming Problems
This paper presents the performance of two robust gradient-based heuristic optimization procedures reported earlier by the authors. Both procedures consist of two distinct phases for locating the global optimum of integer problems with a linear or non-linear objective function subject to linear or non-linear constraints. In both procedures, the first phase finds a local minimum of the function using the gradient approach coupled with hemstitching moves when a constraint is violated, in order to return the search to the feasible region. In the second phase, the first procedure examines 3n integer combinations on the boundary and within the hypercube volume encompassing the result from the first phase, while the second procedure constructs a tunneling function at the local minimum of the first phase so as to find another point on the other side of the barrier where the function value is approximately the same. In the next cycle, the search for the global optimum recommences in both procedures using this newly found point as the starting vector. The search is repeated for various step sizes along the function gradient, as well as along the vector normal to the violated constraints, until no improvement in the optimum value is found. The results from both proposed optimization methods are presented and compared with those from the popular MS Excel Solver provided within the MS Office suite and with other published results.
Global Stability Analysis of a Coupled Model for Healthy and Cancerous Cells Dynamics in Acute Myeloid Leukemia
The mathematical formulation of biomedical problems is an important step toward understanding and predicting the dynamics of the controlled population. In this paper, our first aim is to perform a stability analysis of a coupled model for the dynamics of healthy and cancerous cells in Acute Myeloid Leukemia. Second, we illustrate the effect of the interconnection between healthy and cancerous cells. The PDE-based model is transformed into a nonlinear distributed state-space model (a delay system). For an equilibrium point of interest, necessary and sufficient conditions for global asymptotic stability are given. We thereby obtain necessary and sufficient conditions for global asymptotic stability of the origin and of the healthy situation, and for control of the dynamics of normal and cancerous hematopoietic stem cells during Acute Myeloid Leukemia. Simulation studies are given to illustrate the developed results.
Exact and Approximate Controllability of Nuclear Dynamics Using Bilinear Controls
The control problem associated with nuclear dynamics is represented by a nonlinear integro-differential equation with additive controls. To control the chain reaction, a certain number of neutrons is added into (or withdrawn from) the chamber as and when required, which is not realistic. Instead, we can control the reactor dynamics by a bilinear control, which enters the system as a coefficient of the state. In this paper, we study the approximate and exact controllability of a parabolic integro-differential equation controlled by a bilinear control with non-homogeneous boundary conditions in a bounded domain. We prove the existence of a control and propose an explicit control strategy.
A Forbidden-Minor Characterization for the Class of Co-Graphic Matroids Which Yield the Graphic Element-Splitting Matroids
The n-point splitting operation on graphs is used, together with some further operations, to characterize 4-connected graphs. The element splitting operation on binary matroids is a natural generalization of the notion of the n-point splitting operation on graphs. The element splitting operation on a graphic (cographic) matroid may not yield a graphic (cographic) matroid, and a characterization of the graphic (cographic) matroids whose element splitting matroids are graphic (cographic) is known. The element splitting operation on a cographic matroid may not, in general, yield a graphic matroid. In this paper, we give a necessary and sufficient condition for a cographic matroid to yield a graphic matroid under the element splitting operation. In fact, we prove that the element splitting operation by any pair of elements on a cographic matroid yields a graphic matroid if and only if it has no minor isomorphic to M(K4), where K4 is the complete graph on 4 vertices.
A Study on Class of Elliptic Partial Differential Equations with Measure Data
From the viewpoint of the variational approach, a class of boundary value problems for semilinear elliptic PDEs with a given pair of measure data on a bounded domain with smooth boundary in N-dimensional Euclidean space has been studied. The existence of weak solutions of the PDE in the space of absolutely integrable functions has been investigated. In general, the problem does not possess a solution for every pair of measure data; however, if a solution exists, then it is unique. A pair of measures for which a solution of a PDE of this type exists is called a pair of 'good measures'. Suppose that to a sequence of good measures there corresponds a sequence of solutions of the PDE. In general, if the sequence of pairs of measure data converges weakly to a pair of measures and the corresponding sequence of solutions converges to a function in the space of absolutely integrable functions, then this limit function need not be a solution of the considered problem with the limiting pair of measures as the given measure data. This abstract addresses the issue for the case of a general linear elliptic operator instead of the Laplacian. In other words, the problem addressed here not only poses the question of the existence of a solution but also asks for which class of measures the boundary value problem admits a solution. It has been observed how the notion of the 'reduced limit' answers this question. Furthermore, a relation between the weak limit and the reduced limit of sequences of measures has been proposed.
Existence and Concentration of Solutions for a Class of Elliptic Partial Differential Equations Involving p-Biharmonic Operator
The perturbed nonlinear Schrödinger equation involving the p-biharmonic and p-Laplacian operators, with a real-valued parameter and a continuous real-valued potential function defined over N-dimensional Euclidean space, has been considered. By the variational technique, an existence result for a nontrivial solution of this nonlinear partial differential equation has been established. Further, by the concentration lemma, the concentration of solutions of the same problem on the set where the potential function vanishes, as the real parameter approaches infinity, has been addressed.
Triangular Hesitant Fuzzy TOPSIS Approach in Investment Projects Management
The presented study develops a decision support methodology for a multi-criteria group decision-making problem. The proposed methodology is based on the TOPSIS (Technique for Order Performance by Similarity to Ideal Solution) approach in the hesitant fuzzy environment. The core of the decision-making problem is the selection of the single best alternative, or the ranking of several alternatives, from a set of feasible alternatives. Typically, the process of decision-making is based on an evaluation against certain criteria. In many MCDM problems (such as medical diagnosis, project management, business and financial management, etc.), the process of decision-making involves experts' assessments. These assessments are frequently expressed as fuzzy numbers, confidence intervals, intuitionistic fuzzy values, hesitant fuzzy elements and so on. However, a more realistic approach is to use linguistic expert assessments (linguistic variables). In the proposed methodology, both the values and the weights of the criteria take the form of linguistic variables given by all decision makers, and these assessments are then expressed as triangular fuzzy numbers. Consequently, the proposed approach is based on a triangular hesitant fuzzy TOPSIS decision-making model. Following the TOPSIS algorithm, first the fuzzy positive-ideal solution (FPIS) and the fuzzy negative-ideal solution (FNIS) are defined. Then the alternatives are ranked according to the proximity of their distances to both the FPIS and the FNIS. Based on the proposed approach, a software package has been developed and used to rank investment projects in a real investment decision-making problem. The application and testing of the software were carried out on data provided by the 'Bank of Georgia'.
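The ranking step can be sketched using the vertex distance between triangular fuzzy numbers. The fixed ideal solutions and the ratings below are illustrative assumptions; the real system also handles linguistic-to-fuzzy conversion, criterion weights and hesitant elements.

```python
import numpy as np

def tfn_distance(a, b):
    """Vertex distance between triangular fuzzy numbers a = (l, m, u) and b."""
    return np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2))

def fuzzy_topsis(ratings):
    """ratings[i][j] = triangular fuzzy rating of alternative i on criterion j,
    assumed already weighted and normalized to [0, 1] (benefit criteria).
    Returns closeness coefficients; higher means closer to the ideal."""
    m, n = len(ratings), len(ratings[0])
    fpis = [(1.0, 1.0, 1.0)] * n   # fuzzy positive-ideal solution
    fnis = [(0.0, 0.0, 0.0)] * n   # fuzzy negative-ideal solution
    cc = []
    for i in range(m):
        d_plus = sum(tfn_distance(ratings[i][j], fpis[j]) for j in range(n))
        d_minus = sum(tfn_distance(ratings[i][j], fnis[j]) for j in range(n))
        cc.append(d_minus / (d_plus + d_minus))
    return cc
```

Sorting the alternatives by their closeness coefficients yields the final ranking.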
Bi-Criteria Vehicle Routing Problem for Possibility Environment
A multiple-criteria optimization approach to the solution of the Fuzzy Vehicle Routing Problem (FVRP) is proposed. For the possibility environment, the levels of movement between customers are calculated by a constructed interactive simulation algorithm. The first criterion of the bi-criteria optimization problem, minimization of the expectation of the total fuzzy travel time on closed routes, is constructed for the FVRP. A new, second criterion, maximization of the feasibility of movement on the closed routes, is constructed using the Choquet finite averaging operator. The FVRP is reduced to a bi-criteria partitioning problem over the so-called "promising" routes, which are selected from all admissible closed routes. The convenient selection of the "promising" routes allows us to solve the reduced problem in real time. For the numerical solution of the bi-criteria partitioning problem, the ε-constraint approach is used. An exact algorithm is implemented based on D. Knuth's Dancing Links technique and the DLX algorithm. The main objective was to present the new approach for the FVRP when there are difficulties in moving on the roads; this approach is called the FVRP for extreme conditions (FVRP-EC). A further aim of this paper was to construct the solution model of the constructed FVRP. Results are illustrated on a numerical example in which all Pareto-optimal solutions are found. An approach for the more complex FVRP model with time windows was also developed, and a numerical example is presented in which optimal routes are constructed for extreme conditions on the roads.
Fuzzy Multi-Objective Approach for Emergency Location Transportation Problem
In the modern world, emergency management decision support systems are actively used by state organizations that deal with extreme and abnormal processes and provide optimal and safe management of the supplies needed for civil and military facilities in geographical areas affected by disasters, earthquakes, fires and other accidents, weapons of mass destruction, terrorist attacks, etc. Obviously, these kinds of extreme events cause significant losses and damage to infrastructure. In such cases, the use of intelligent support technologies is very important for the quick and optimal location and transportation of emergency services in order to avoid further losses caused by these events. Timely servicing from emergency service centers to the affected disaster regions (the response phase) is a key task of the emergency management system, and scientific research in this field occupies an important place among decision-making problems. Our goal was to create an expert knowledge-based intelligent support system that serves as an assistant tool providing optimal solutions to the above-mentioned problem. The inputs to the mathematical model of the system are objective data as well as expert evaluations. The outputs of the system are solutions of the Fuzzy Multi-Objective Emergency Location-Transportation Problem (FMOELTP) for disaster regions. The development and testing of the intelligent support system were done on the example of an experimental disaster region (a geographical zone of Georgia) generated using simulation modeling. Four objectives are considered in our model. The first objective is to minimize the expectation of the total transportation duration of the needed products. The second objective is to minimize the total selection unreliability index of the opened humanitarian aid distribution centers (HADCs). The third objective minimizes the number of agents needed to operate the opened HADCs.
The fourth objective minimizes the non-covered demand over all demand points. Possibility chance constraints and objective constraints were constructed based on objective and subjective data. The FMOELTP was constructed in a static, fuzzy environment, since the decisions to be made are taken immediately after the disaster (within a few hours) with the information available at that moment. It is assumed that the requests for products are estimated by homeland security organizations, or their experts, based upon their experience and their evaluation of the disaster's seriousness. Estimated transportation times take into account the routing access difficulty of the region and the infrastructure conditions. We propose an epsilon-constraint method for finding exact solutions of the problem, and it is proved that this approach generates the exact Pareto front of the multi-objective location-transportation problem addressed. For large problem dimensions the exact method can require long computing times, so we propose an approximate method that imposes a number of stopping criteria on the exact method. For large dimensions of the FMOELTP, an Estimation of Distribution Algorithm (EDA) approach is developed.
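The epsilon-constraint idea can be illustrated on a tiny discrete bi-objective instance: minimize the first objective subject to a bound on the second, and sweep the bound over its attained values to trace the Pareto front. This is only a conceptual sketch with hypothetical data, not the paper's four-objective formulation.

```python
def epsilon_constraint(solutions, f1, f2):
    """Minimize f1 subject to f2(s) <= eps, sweeping eps over all attained
    f2 values; collecting the optima traces the Pareto front."""
    front = set()
    for eps in sorted({f2(s) for s in solutions}):
        feasible = [s for s in solutions if f2(s) <= eps]
        # lexicographic tie-break keeps weakly dominated points out
        best = min(feasible, key=lambda s: (f1(s), f2(s)))
        front.add((f1(best), f2(best)))
    return sorted(front)

# Hypothetical candidate solutions scored on two objectives (cost, duration)
pts = [(1, 5), (2, 3), (4, 1), (5, 5)]
pareto = epsilon_constraint(pts, lambda s: s[0], lambda s: s[1])
```

In the full problem the inner minimization is a constrained optimization rather than an enumeration, but the sweep over epsilon values is the same.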
Algorithms Utilizing Wavelet to Solve Various Partial Differential Equations
The article traces the development and evolution of various algorithms for solving partial differential equations that combine wavelets with already explored solution procedures. The approach presents a study, spanning over a decade, of traces and remarks on the modifications involved in implementing the multi-resolution property of wavelets together with the finite difference approach, the finite element method and the finite volume method for dealing with a variety of partial differential equations in areas such as plasma physics, astrophysics, shallow water models, the modified Burgers equations used in optical fibers, biology, fluid dynamics, chemical kinetics, etc.
Exact Solutions of K(N,N)-Type Equations Using Jacobi Elliptic Functions
In this paper, the modified K(n,n) and K(n+1,n+1) equations have been solved using mapping methods, which give a variety of solutions in terms of Jacobi elliptic functions (JEFs). The solutions as m approaches 0 and 1, where m is the modulus of the JEFs, have also been deduced. The role of the constraint conditions has been discussed.
Solutions of Fuzzy Transportation Problem Using Best Candidates Method and Different Ranking Techniques
The Transportation Problem (TP) is based on the supply and demand of commodities transported from a source to different destinations. The usual methods for solving transportation problems are the North-West Corner Rule, the Least Cost Method, Vogel's Approximation Method, etc. Here, the transportation costs are considered as imprecise values described by fuzzy numbers, which is more realistic and general in nature. In this study, the Best Candidate Method (BCM) is applied. For ranking, the Centroid Ranking Technique (CRT) and the Robust Ranking Technique have been adopted to transform the fuzzy transportation problem, and the above methods are applied to the EDWARDS Vacuum Company, Crawley, West Sussex, United Kingdom. A comparative study is also given. We find that the transportation cost is reduced to a minimum when the CRT is used under the BCM.
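The centroid ranking step can be sketched as follows for triangular fuzzy costs (l, m, u); the crisp matrix it produces can then be fed into any classical TP method such as the BCM. The fuzzy costs shown are hypothetical.

```python
def centroid_rank(tfn):
    """Centroid ranking of a triangular fuzzy number (l, m, u):
    the x-coordinate of the centroid of its membership triangle."""
    l, m, u = tfn
    return (l + m + u) / 3.0

def defuzzify_costs(fuzzy_costs):
    """Convert a fuzzy transportation cost matrix into a crisp one,
    after which a classical TP solution method can be applied."""
    return [[centroid_rank(c) for c in row] for row in fuzzy_costs]

# Hypothetical 2x2 fuzzy cost matrix
fuzzy_costs = [[(1, 2, 3), (4, 5, 6)],
               [(2, 4, 6), (1, 3, 5)]]
crisp = defuzzify_costs(fuzzy_costs)
```

The Robust Ranking Technique would replace `centroid_rank` with an alpha-cut integral, leaving the rest of the pipeline unchanged.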
Comparing Numerical Accuracy of Solutions of Ordinary Differential Equations (ODE) Using Taylor's Series Method, Euler's Method and Runge-Kutta (RK) Method
Ordinary differential equations (ODEs) represent a natural framework for the mathematical modeling of many real-life situations in engineering, control systems, physics, chemistry, astronomy, etc. Such differential equations can be solved by analytical methods or by numerical methods. If the solution is calculated analytically, it is done through the theory of calculus and thus requires a longer time to obtain. In this paper, we compare the numerical accuracy of the solutions given by three main types of one-step initial value solvers: Taylor's series method, Euler's method and the fourth-order Runge-Kutta method (RK4). The comparison of accuracy is obtained by comparing the solutions of an ordinary differential equation given by these three methods. Furthermore, to verify the accuracy, we compare these numerical solutions with the exact solution.
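Two of the three solvers can be sketched directly (Taylor's series method additionally needs analytic derivatives of f, so it is omitted here). The test problem y' = y, y(0) = 1, chosen for illustration, has the exact solution e at t = 1, so the errors of both methods can be measured directly.

```python
import math

def euler(f, y0, t0, t1, n):
    """Euler's method: one slope evaluation per step (first-order accurate)."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

def rk4(f, y0, t0, t1, n):
    """Classical fourth-order Runge-Kutta: four slope evaluations per step."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

# Test problem y' = y, y(0) = 1; exact solution at t = 1 is e
f = lambda t, y: y
err_euler = abs(euler(f, 1.0, 0.0, 1.0, 100) - math.e)
err_rk4 = abs(rk4(f, 1.0, 0.0, 1.0, 100) - math.e)
```

With the same step count, RK4 is dramatically more accurate than Euler, at the cost of four slope evaluations per step instead of one.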
Minimizing the Impact of Covariate Detection Limit in Logistic Regression
In many epidemiological and environmental studies, covariate measurements are subject to a detection limit. In most applications, covariate measurements are truncated from below, which is known as left-truncation, because the measuring device used to measure the covariate fails to detect values falling below a certain threshold. In regression analyses, this inflates the bias and the mean squared error (MSE) of the estimators. This paper suggests a response-based regression calibration method to correct the deleterious impact introduced by the covariate detection limit on the estimators of the parameters of a simple logistic regression model. Compared to the maximum likelihood method, the proposed method is computationally simpler and hence easier to implement. It is robust to violations of the distributional assumption about the covariate of interest. The performance of the proposed method in producing correct inference, compared to other competing methods, has been investigated through extensive simulations. A real-life application of the method is also shown using data from a population-based case-control study of non-Hodgkin lymphoma.
The Grade Six Pupils' Learning Styles and Their Achievements and Difficulties on Fractions Based on Kolb's Model
One of the ultimate goals of any nation is to produce competitive manpower, and this includes the Philippines. Proficiency in the field of mathematics has a significant role in achieving this goal. However, mathematics is considered by most people to be the most difficult subject to learn, which is manifested in the low performance of students in national and international assessments. Educators have widely used learning style models to identify the ways students learn, and such models can be the frontline in uncovering the difficulties each learner has with a particular topic, specifically concepts pertaining to fractions. As many educators have observed, students show difficulties in doing mathematical tasks, and to a great degree in dealing with fractions, most notably in the district of Datu Odin Sinsuat, Maguindanao. This study focused on the learning styles of grade six pupils of the Datu Odin Sinsuat district, along with their achievements and difficulties in learning concepts on fractions. Five hundred thirty-two pupils from ten different public elementary schools of the Datu Odin Sinsuat districts were purposively selected as respondents of the study. A descriptive research design using the survey method was employed. Quantitative analyses of the pupils' learning styles based on the Kolb Learning Style Inventory (KLSI) and of their scores on a mathematics diagnostic test on fraction concepts were made using this method. Simple frequency and percentage counts were used to analyze the pupils' learning styles and their achievements on fractions. To determine the pupils' difficulties with fractions, the index of difficulty of every item was determined. Lastly, the Kruskal-Wallis test was used to determine whether there was a significant difference in the pupils' achievements on fractions classified by their learning styles; this test was set at the 0.05 level of significance.
The minimum H-value of 7.82 was used to determine the significance of the test. The results revealed that the pupils of the Datu Odin Sinsuat districts learn fractions in varied ways, as they are of different learning styles. However, their achievements in fractions are low regardless of their learning styles. Difficulties in learning fractions were found mostly in the areas of estimation, comparing/ordering, and the division interpretation of fractions. Most of the pupils found it very difficult to use a fraction as a measure, to compare or arrange a series of fractions, and to use the concept of a fraction as a quotient.
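The item-level index of difficulty used above is commonly computed as the proportion of correct responses. The interpretation bands in the classifier below are a common convention, not necessarily the cut-offs used in this study.

```python
def difficulty_index(responses):
    """Proportion of pupils answering an item correctly (1 = correct, 0 = wrong).
    Lower values indicate a more difficult item."""
    return sum(responses) / len(responses)

def classify(p):
    """Illustrative interpretation bands (cut-offs vary between authors)."""
    if p < 0.25:
        return "very difficult"
    if p <= 0.75:
        return "moderately difficult"
    return "easy"
```

An item answered correctly by only a tenth of the pupils would thus be classified as very difficult.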
Lowering Error Floors by Concatenation of Low-Density Parity-Check (LDPC) and Array Code
Low-density parity-check (LDPC) codes have been shown to deliver capacity-approaching performance; however, problematic graphical structures (e.g., trapping sets) in the Tanner graphs of some LDPC codes can cause high error floors in bit-error-ratio (BER) performance under the conventional sum-product algorithm (SPA). This paper presents a serial concatenation scheme that avoids the trapping sets and lowers the error floors of LDPC codes. The outer code in the proposed concatenation is the LDPC code, and the inner code is a high-rate array code. The approach applies an iterative hybrid process between BCJR decoding for the array code and the SPA for the LDPC code, together with bit-pinning and bit-flipping techniques. The Margulis code of size (2640, 1320) has been used for the simulation, and it is shown that the proposed concatenation and decoding scheme can considerably improve the error floor performance with minimal rate loss.
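The paper's scheme combines SPA and BCJR decoding on the (2640, 1320) Margulis code, which is beyond the scope of a short sketch. The simpler hard-decision bit-flipping step mentioned in the abstract can, however, be illustrated on a toy parity-check matrix (the matrix and error pattern below are assumed examples, not from the paper):

```python
# Hard-decision bit-flipping decoding on a toy 3x7 parity-check matrix.
import numpy as np

H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def bit_flip_decode(H, r, max_iters=10):
    """Each round, flip the bit involved in the most unsatisfied checks."""
    c = r.copy()
    for _ in range(max_iters):
        syndrome = H @ c % 2
        if not syndrome.any():
            return c                      # all parity checks satisfied
        # count, per bit, how many failing checks it participates in
        votes = H.T @ syndrome
        c[np.argmax(votes)] ^= 1          # flip the worst offender
    return c

codeword = np.zeros(7, dtype=int)         # the all-zero codeword
received = codeword.copy()
received[2] ^= 1                          # inject a single bit error
decoded = bit_flip_decode(H, received)
print(decoded.tolist())                   # error corrected: all zeros
```

In the full scheme, such bit-flipping is only one auxiliary technique alongside bit-pinning and soft iterative SPA/BCJR exchange.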
Effect of Internal Heat Generation on Free Convective Power Law Variable Temperature Past Vertical Plate Considering Exponential Variable Viscosity and Thermal Diffusivity
The flow and heat transfer characteristics of free convection with temperature-dependent viscosity and thermal diffusivity along a vertical plate, including an internal heat generation effect, have been studied. The plate temperature is assumed to follow a power law of the distance from the leading edge. The governing two-dimensional equations are transformed using suitable similarity transformations and then solved numerically using a fifth-order Runge-Kutta-Fehlberg scheme with a modified Newton-Raphson shooting method. The effects of the various parameters, such as the variable viscosity parameter β_1, the thermal diffusivity parameter β_2, the heat generation parameter c, and the Prandtl number Pr, on the velocity and temperature profiles, as well as on the local skin-friction coefficient and the local Nusselt number, are presented in tabular form. Our results suggest that the presence of an exponentially decaying internal heat generation term increases the flow compared with the case without heat generation.
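The paper's similarity equations are not reproduced here, but the Runge-Kutta shooting idea it relies on can be sketched on a simple model boundary value problem (the problem y'' = -2 with y(0) = y(1) = 0 is an assumed toy example with exact solution y = x(1 - x); SciPy's RK45 integrator and a scalar root-finder stand in for the paper's Runge-Kutta-Fehlberg/Newton-Raphson pair):

```python
# Shooting method sketch: integrate an IVP with a guessed initial slope s,
# then root-find on s so the far boundary condition y(1) = 0 is met.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def rhs(x, z):
    y, dy = z
    return [dy, -2.0]            # toy ODE: y'' = -2

def boundary_miss(s):
    """Residual at the far boundary for initial slope s = y'(0)."""
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, s], method="RK45", rtol=1e-8)
    return sol.y[0, -1]          # want y(1) = 0

slope = brentq(boundary_miss, -5.0, 5.0)   # shooting on y'(0)
print(round(slope, 4))                      # exact value is 1.0
```

In the paper, the same loop is applied to the coupled momentum and energy similarity equations, with the unknown wall values of f''(0) and θ'(0) as shooting parameters.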
A Comparative Study of Additive and Nonparametric Regression Estimators and Variable Selection Procedures
One of the biggest challenges in nonparametric regression is the curse of dimensionality. Additive models are known to mitigate this problem by estimating only the individual additive effect of each covariate. However, if the additive model is misspecified, its accuracy relative to the fully nonparametric estimator is unknown. In this work, the efficiency of completely nonparametric regression estimators, such as the Loess estimator, is compared to that of estimators that assume additivity in several situations, including additive and non-additive regression scenarios. The comparison is made by computing the oracle mean squared error of the estimators with respect to the true nonparametric regression function. Then, a backward elimination selection procedure based on the Akaike Information Criterion is proposed, computed from either the additive or the nonparametric model. Simulations show that if the additive model is misspecified, the percentage of the time it fails to select important variables can be higher than that of the fully nonparametric approach. A dimension reduction step is included for cases where the nonparametric estimator cannot be computed due to the curse of dimensionality. Finally, the Boston Housing dataset is analyzed using the proposed backward elimination procedure, and the selected variables are identified.
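The backward elimination loop can be sketched in a self-contained way. The paper computes AIC from additive or nonparametric fits; as a simplified stand-in, the sketch below uses ordinary least squares AIC on synthetic data where only the first two of four candidate covariates actually matter (data, coefficients, and model choice are all assumptions for illustration):

```python
# AIC-based backward elimination, illustrated with OLS on synthetic data.
import numpy as np

def ols_aic(X, y):
    """Gaussian AIC for an OLS fit with intercept."""
    n = len(y)
    A = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = np.sum((y - A @ beta) ** 2)
    return n * np.log(rss / n) + 2 * A.shape[1]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 2 * X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=200)

active = list(range(X.shape[1]))
while len(active) > 1:
    current = ols_aic(X[:, active], y)
    # try dropping each variable; keep the drop that lowers AIC the most
    trials = [(ols_aic(X[:, [v for v in active if v != j]], y), j)
              for j in active]
    best_aic, drop = min(trials)
    if best_aic >= current:
        break                      # no single removal improves AIC
    active.remove(drop)

print(sorted(active))              # indices of the retained covariates
```

In the paper's procedure, `ols_aic` is replaced by the AIC of the additive or fully nonparametric fit, which is exactly where the misspecification effects described above arise.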
On the Fractional Integration of Generalized Mittag-Leffler Type Functions
In this paper, the generalized fractional integral operators of two generalized Mittag-Leffler type functions are investigated. The special cases of interest involve the generalized $M$-series and the $K$-function, both introduced by Sharma. The two pairs of theorems established herein generalize recent results about left- and right-sided generalized fractional integration operators, applied here to the $M$-series and the $K$-function. The results also have important applications in physics and mathematical engineering.
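Both the $M$-series and the $K$-function are built from Mittag-Leffler type power series. The flavor of these functions can be shown by evaluating the classical two-parameter case $E_{\alpha,\beta}(z) = \sum_{k \ge 0} z^k / \Gamma(\alpha k + \beta)$ with a truncated series (the truncation length is an assumption suitable only for small $|z|$):

```python
# Truncated-series evaluation of the two-parameter Mittag-Leffler function.
from math import gamma, exp, cos

def mittag_leffler(alpha, beta, z, terms=50):
    """E_{alpha,beta}(z) via a truncated series; accurate for small |z|."""
    return sum(z ** k / gamma(alpha * k + beta) for k in range(terms))

# Sanity checks against classical special cases:
#   E_{1,1}(z) = exp(z)   and   E_{2,1}(-z^2) = cos(z)
err1 = abs(mittag_leffler(1, 1, 1.0) - exp(1.0))
err2 = abs(mittag_leffler(2, 1, -1.0) - cos(1.0))
print(err1 < 1e-12, err2 < 1e-12)   # True True
```

The generalized $M$-series and $K$-function add Pochhammer-symbol numerator and denominator parameters to each term of this series; the paper's theorems describe how fractional integral operators act on such series.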
Computational Fluid Dynamics Simulation of Gas-Liquid Phase Stirred Tank
A Computational Fluid Dynamics (CFD) technique has been applied to simulate the gas-liquid flow in a dual-Rushton-impeller stirred tank. An Eulerian-Eulerian model was adopted to simulate the multiphase flow, with the standard Schiller-Naumann correlation for the drag coefficient. Turbulence was modeled using the standard k-ε turbulence model. The present CFD model predicts the flow pattern, local gas hold-up, and local specific interfacial area; it also predicts the local volumetric mass transfer coefficient (kLa) for a single impeller. The predictions were compared with experimental and CFD results from the published literature. The model slightly over-predicts relative to the experimental results, but is in reasonable agreement with other simulation results in the published literature.
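The full Eulerian-Eulerian model requires a CFD solver, but the Schiller-Naumann drag closure it uses is a simple algebraic correlation, $C_D = \frac{24}{Re}(1 + 0.15\,Re^{0.687})$ for $Re \le 1000$ with a constant plateau of 0.44 beyond, and can be written directly (the sample Reynolds numbers below are arbitrary):

```python
# Standard Schiller-Naumann drag coefficient correlation for a sphere/bubble.
def schiller_naumann_cd(re):
    """Drag coefficient at particle Reynolds number re."""
    if re <= 0:
        raise ValueError("Reynolds number must be positive")
    if re <= 1000.0:
        return 24.0 / re * (1.0 + 0.15 * re ** 0.687)
    return 0.44                      # Newton-regime plateau

print(schiller_naumann_cd(0.1))      # near the Stokes limit 24/Re
print(schiller_naumann_cd(5000.0))   # constant 0.44
```

In the simulation, this coefficient feeds the interphase momentum exchange term that couples the gas and liquid phases.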
Analysis of Overall Thermo-Elastic Properties of Random Particulate Nanocomposites with Various Interphase Models
In the paper, a (hierarchical) approach to the analysis of the thermo-elastic properties of random composites with interphases is outlined and illustrated. It is based on a statistical homogenization method, the method of conditional moments, combined with the recently introduced notion of the energy-equivalent inhomogeneity, which is extended here to include thermal effects. After exposition of the general principles, the approach is applied to the investigation of the effective thermo-elastic properties of a material with randomly distributed nanoparticles. The basic idea of the equivalent inhomogeneity is to replace the inhomogeneity and its surrounding interphase by a single equivalent inhomogeneity with a constant stiffness tensor and coefficient of thermal expansion, combining the thermal and elastic properties of both. The equivalent inhomogeneity is then perfectly bonded to the matrix, which allows composites with interphases to be analyzed using techniques devised for problems without interphases. From the mechanical viewpoint, the definition of the equivalent inhomogeneity is based on Hill's energy equivalence principle, applied to the problem consisting only of the original inhomogeneity and its interphase. It is more general than definitions proposed in the past in that, conceptually and practically, it allows inhomogeneities of various shapes and various models of interphases to be considered. This is illustrated for spherical particles with two models of interphases, the Gurtin-Murdoch material surface model and the spring layer model. The resulting equivalent inhomogeneities are subsequently used to determine the effective thermo-elastic properties of randomly distributed particulate composites. The effective stiffness tensor and coefficient of thermal expansion of the material with the equivalent inhomogeneities so defined are determined by the method of conditional moments.
Closed-form expressions for the effective thermo-elastic parameters of a composite consisting of a matrix and randomly distributed spherical inhomogeneities are derived for the bulk and shear moduli, as well as for the coefficient of thermal expansion. The dependence of the effective parameters on the interphase properties is included in the resulting expressions, exhibiting analytically the nature of the size effects in nanomaterials. As a numerical example, an epoxy matrix with randomly distributed spherical glass particles is investigated. The dependence of the effective bulk and shear moduli, as well as of the effective thermal expansion coefficient, on the particle volume fraction (for different radii of nanoparticles) and on the nanoparticle radius (for a fixed volume fraction of nanoparticles) for the different interphase models is compared with and discussed in the context of other theoretical predictions. Possible applications of the proposed approach to short-fiber composites with various types of interphases are discussed.
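The paper's closed-form method-of-conditional-moments expressions, which carry the interphase and size effects, are not reproduced in the abstract. As a reference point against which such predictions are commonly compared, the classical Mori-Tanaka (Hashin-Shtrikman lower bound) estimate of the effective bulk modulus for spherical particles perfectly bonded to a matrix, with no interphase, can be sketched; the epoxy and glass moduli below are typical handbook values assumed for illustration:

```python
# Classical Mori-Tanaka effective bulk modulus for spherical particles
# (no interphase, no size effect); a baseline, not the paper's model.
def effective_bulk_modulus(k_m, g_m, k_p, f):
    """k_m, g_m: matrix bulk/shear moduli; k_p: particle bulk modulus;
    f: particle volume fraction."""
    dk = k_p - k_m
    return k_m + f * dk / (1.0 + (1.0 - f) * dk / (k_m + 4.0 * g_m / 3.0))

# Assumed moduli in GPa: epoxy K = 4, G = 1.5; glass K = 43
k_eff = effective_bulk_modulus(4.0, 1.5, 43.0, 0.3)
print(round(k_eff, 2))   # lies between the matrix and particle moduli
```

Unlike this baseline, the paper's expressions depend on the nanoparticle radius through the interphase model, which is precisely the size effect the numerical example investigates.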