Wednesday, June 5, 2019

Performance Measure of PCA and DCT for Images

Generally, in image processing, transformation is the basic technique applied in order to study the characteristics of the image under consideration. Here we present a method in which we analyze the performance of two transformation methods, namely PCA and DCT. In this thesis we are going to analyze the system by first training it on a set with a particular number of characters and then analyzing the performance of the two methods by calculating the error in each. This thesis studied and tested the PCA and DCT transformation techniques.

PCA is a technique which involves a procedure that mathematically transforms a number of possibly correlated variables into a smaller number of uncorrelated variables called principal components. The first principal component accounts for as much of the variability in the data as possible, and each succeeding component accounts for as much of the remaining variability as possible. Depending on the application field, it is also called the discrete Karhunen-Loève transform (KLT), the Hotelling transform, or proper orthogonal decomposition (POD).

DCT expresses a series of finitely many data points in terms of a sum of cosine functions oscillating at different frequencies. Transforms are important to numerous applications in science and engineering, from lossy compression of audio and images (where small high-frequency components can be discarded) to spectral methods for the numerical solution of partial differential equations.

CHAPTER 1
INTRODUCTION

1.1 Introduction

Over the past few years, several face recognition systems have been proposed based on principal components analysis (PCA) [14, 8, 13, 15, 1, 10, 16, 6]. Although the details vary, these systems can all be described in terms of the same preprocessing and run-time steps.
During preprocessing, they register a gallery of m training images to each other and unroll each image into a vector of n pixel values. Next, the mean image for the gallery is subtracted from each, and the resulting centered images are placed in a gallery matrix M. Element (i, j) of M is the ith pixel from the jth image. A covariance matrix W = M M^T characterizes the distribution of the m images in R^n. A subset of the eigenvectors of W are used as the basis vectors for a subspace in which to compare gallery and novel probe images. When sorted by decreasing eigenvalue, the full set of unit-length eigenvectors forms an orthonormal basis where the first direction corresponds to the direction of maximum variance in the images, the second to the next largest variance, and so on. These basis vectors are the principal components of the gallery images. Once the eigenspace is computed, the centered gallery images are projected into this subspace. At run-time, recognition is accomplished by projecting a centered probe image into the subspace, and the nearest gallery image to the probe image is selected as its match. There are many differences in the systems referenced. Some systems assume that the images are registered prior to face recognition [15, 10, 11, 16]; among the rest, a variety of techniques are employed to identify facial features and register them to each other. Different systems may use different distance measures when matching probe images to the nearest gallery image. Different systems select different numbers of eigenvectors (usually those corresponding to the largest k eigenvalues) in order to compress the information and to improve accuracy by eliminating eigenvectors corresponding to noise rather than meaningful variation.
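The preprocessing pipeline just described (center the gallery, form the covariance matrix W = M M^T, sort its eigenvectors by decreasing eigenvalue, and keep the leading k as the eigenface basis) can be sketched in a few lines of NumPy. This is an illustrative sketch with random stand-in data, not the thesis's MATLAB implementation; the array sizes and the choice k = 5 are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels, m_images = 64, 10          # tiny stand-in for a real gallery
gallery = rng.random((n_pixels, m_images))

# Center: subtract the mean image from every column.
mean_image = gallery.mean(axis=1, keepdims=True)
M = gallery - mean_image

# Covariance W = M M^T characterizes the image distribution in R^n.
W = M @ M.T

# Eigenvectors sorted by decreasing eigenvalue form the orthonormal
# basis; the leading directions are the principal components.
eigvals, eigvecs = np.linalg.eigh(W)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Keep the k leading eigenfaces and project the centered gallery.
k = 5
U = eigvecs[:, :k]                   # n_pixels x k subspace basis
projections = U.T @ M                # k x m_images gallery coordinates
```

At run time, a probe image would be centered with the same mean image and projected with the same `U` before the nearest-gallery-image comparison.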
To help evaluate and compare individual steps of the face recognition process, Moon and Phillips created the FERET face database and performed initial comparisons of some common distance measures for otherwise identical systems [10, 11, 9]. This paper extends their work, presenting further comparisons of distance measures over the FERET database and examining alternative ways of selecting subsets of eigenvectors. Principal Component Analysis (PCA) is one of the most successful techniques that have been used in image recognition and compression. PCA is a statistical method under the broad title of factor analysis. The purpose of PCA is to reduce the large dimensionality of the data space (observed variables) to the smaller intrinsic dimensionality of the feature space (independent variables), which is needed to describe the data economically. This is the case when there is a strong correlation between observed variables. The tasks which PCA can perform are prediction, redundancy removal, feature extraction, data compression, etc. Because PCA is a classical technique that works in the linear domain, applications having linear models are suitable, such as signal processing, image processing, system and control theory, communications, etc. Face recognition has many applicable areas. Moreover, it can be categorized into face identification, face classification, or sex determination. The most useful applications include crowd surveillance, video content indexing, personal identification (e.g., driver's licenses), mug shot matching, entrance security, etc. The main idea of using PCA for face recognition is to express the large 1-D vector of pixels constructed from a 2-D facial image in terms of the compact principal components of the feature space. This can be called eigenspace projection. The eigenspace is calculated by identifying the eigenvectors of the covariance matrix derived from a set of facial images (vectors).
The details are described in the following section.

PCA computes the basis of a space which is spanned by its training vectors. These basis vectors, actually eigenvectors, computed by PCA are in the direction of the largest variance of the training vectors. As stated earlier, we call them eigenfaces. Each eigenface can be viewed as a feature. When a particular face is projected onto the face space, its vector in the face space describes the importance of each of those features in the face. The face is expressed in the face space by its eigenface coefficients (or weights). We can handle a large input vector, a facial image, by considering only its small weight vector in the face space. This means that we can reconstruct the original face with some error, since the dimensionality of the image space is much larger than that of the face space.

We build a face recognition system using the Principal Component Analysis (PCA) algorithm. Automatic face recognition systems try to find the identity of a given face image according to their memory. The memory of a face recognizer is generally simulated by a training set. In this project, our training set consists of the features extracted from known face images of different persons. Thus, the task of the face recognizer is to find the feature vector in the training set most similar to the feature vector of a given test image. Here, we want to recognize the identity of a person when an image of that person (the test image) is given to the system. You will use PCA as a feature extraction algorithm in this project. In the training phase, you should extract feature vectors for each image in the training set. Let A be a training image of person A which has a pixel resolution of M × N (M rows, N columns). In order to extract PCA features of A, you will first convert the image into a pixel vector by concatenating each of the M rows into a single vector. The length (or dimensionality) of this vector will be M × N.
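The training and recognition phases described here can be sketched as two small NumPy functions. The function names (`extract_feature`, `recognize`) are ours, not the thesis's, and the trivial identity basis used in the test below stands in for a real PCA basis.

```python
import numpy as np

def extract_feature(image, mean_vec, basis):
    # Concatenate the M rows of the M x N image into one pixel vector
    # of length M*N, subtract the training mean, and project it onto
    # the PCA basis (one basis vector per column of `basis`).
    phi = image.reshape(-1).astype(float) - mean_vec
    return basis.T @ phi

def recognize(test_feature, train_features, identities):
    # The output is the identity of the training feature vector that
    # is most similar to the test feature in Euclidean distance.
    dists = np.linalg.norm(train_features - test_feature, axis=1)
    return identities[int(np.argmin(dists))]
```

In the training phase, `extract_feature` is applied to every gallery image and the results are stored; at test time, the probe's feature vector is matched against the stored ones with `recognize`.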
In this project, you will use the PCA algorithm as a dimensionality reduction technique which transforms the pixel vector of A into a feature vector of dimensionality d, where d ≪ M × N. For each training image i, you should calculate and store its feature vector. In the recognition phase (or testing phase), you will be given a test image j of a known person; let j also denote the identity (name) of this person. As in the training phase, you should compute the feature vector of this person using PCA. In order to identify j, you should compute the similarities between this feature vector and all of the feature vectors in the training set. The similarity between feature vectors can be computed using Euclidean distance. The identity i of the most similar training feature vector will be the output of our face recognizer. If i = j, it means that we have correctly identified the person j; otherwise, if i ≠ j, it means that we have misclassified the person j.

1.2 Thesis structure

This thesis work is divided into five chapters as follows.

Chapter 1: Introduction
This introductory chapter briefly explains the procedure of transformation in face recognition and its applications. Here we explain the scope of this research, and finally we give the structure of the thesis for the reader's convenience.

Chapter 2: Basics of Transformation Techniques
This chapter gives an introduction to the transformation techniques. In this chapter we introduce the two transformation techniques for which we are going to perform the analysis, whose results are used for the face recognition purpose.

Chapter 3: Discrete Cosine Transform
In this chapter we continue the discussion of transformations from chapter 2. The other method, i.e., DCT, is introduced and its analysis is presented.

Chapter 4: Implementation and results
This chapter presents the simulated results of the face recognition analysis using MATLAB.
It gives the explanation for each and every step of the design of the face recognition analysis, and it gives the tested results of the transformation algorithms.

Chapter 5: Conclusion and future work
This is the final chapter of this thesis. Here, we conclude our research, discuss the achieved results of this research work, and suggest future work for this research.

CHAPTER 2
BASICS OF IMAGE TRANSFORM TECHNIQUES

2.1 Introduction

Nowadays image processing has gained so much importance that in every field of science we apply image processing, both for the purpose of security and because of the increasing demand for it. Here we apply two different transformation techniques in order to study the performance, which will be helpful for the detection purpose. The computation of the performance of the image given for testing is performed in two steps:

1. PCA (Principal Component Analysis)
2. DCT (Discrete Cosine Transform)

2.2 Principal Component Analysis

PCA is a technique which involves a procedure that mathematically transforms a number of possibly correlated variables into a smaller number of uncorrelated variables called principal components. The first principal component accounts for as much of the variability in the data as possible, and each succeeding component accounts for as much of the remaining variability as possible. Depending on the application field, it is also called the discrete Karhunen-Loève transform (KLT), the Hotelling transform, or proper orthogonal decomposition (POD). Today PCA is mostly used as a tool in exploratory data analysis and for making predictive models. PCA involves the calculation of the eigenvalue decomposition of a data covariance matrix, or the singular value decomposition of a data matrix, usually after mean-centering the data for each attribute. The results of this analysis technique are usually shown in terms of component scores and loadings. PCA is an eigenvector-based multivariate analysis.
Its operation can be thought of as revealing the internal structure of the data in a way that best explains the mean and variance in the data. If there is multivariate data, it can be visualized as a set of coordinates in a multi-dimensional data space, and this algorithm supplies the user with a lower-dimensional picture, a shadow of the object when viewed from its most informative viewpoint, which reveals the true informative nature of the object. PCA is closely related to factor analysis; indeed, some statistical software packages deliberately conflate the two techniques. True factor analysis makes different assumptions about the underlying structure and solves for the eigenvectors of a slightly different matrix.

2.2.1 PCA Implementation

PCA is mathematically defined as an orthogonal linear transformation that transforms the data to a new coordinate system such that the greatest variance of any projection of the data comes to lie on the first coordinate, the second greatest variance on the second coordinate, and so on. PCA is theoretically the optimal transform for the given data in least-squares terms. For a data matrix X^T with zero empirical mean (i.e., the empirical mean of the distribution has been subtracted from the data set), where each row represents a different repetition of the experiment and each column gives the results from a particular probe, the PCA transformation is given by

    Y^T = X^T W = V Σ^T

where Σ is an m-by-n diagonal matrix with non-negative diagonal elements, and W Σ V^T is the singular value decomposition of X. Given a set of points in Euclidean space, the first principal component corresponds to the line that passes through the mean and minimizes the sum of squared errors with those points. The second principal component corresponds to the same after all correlation with the first principal component has been subtracted from the points. Each eigenvalue indicates the portion of the variance that is correlated with each eigenvector.
Thus, the sum of all the eigenvalues is equal to the sum of the squared distances of the points from their mean, divided by the number of dimensions. PCA essentially rotates the set of points around their mean in order to align them with the first few principal components. This moves as much of the variance as possible into the first few dimensions. The values in the remaining dimensions tend to be small and may be dropped with minimal loss of information; in this way PCA is used for dimensionality reduction. PCA is the optimal linear transformation for keeping the subspace that has the largest variance. This advantage comes at the price of greater computational requirements; non-linear dimensionality reduction techniques, however, tend to be even more computationally demanding than PCA. Mean subtraction is necessary in performing PCA to ensure that the first principal component describes the direction of maximum variance. If mean subtraction is not performed, the first principal component will instead correspond to the mean of the data.
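The effect of mean subtraction can be illustrated numerically. In the NumPy sketch below (our own synthetic example, not from the thesis), the leading singular vector of the uncentered data matrix points almost exactly at the data mean, while after centering it recovers the genuine high-variance direction:

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 observations of 5 variables with a large common offset of 50;
# variable 2 carries the real variance (std ~10 vs. std 1 elsewhere).
X = rng.normal(size=(200, 5)) + 50.0
X[:, 2] += rng.normal(scale=10.0, size=200)

mean_dir = X.mean(axis=0) / np.linalg.norm(X.mean(axis=0))

# Without centering, the leading singular vector points at the mean.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
print(abs(Vt[0] @ mean_dir))    # close to 1

# After centering, it recovers the true high-variance direction.
Xc = X - X.mean(axis=0)
_, _, Vtc = np.linalg.svd(Xc, full_matrices=False)
print(abs(Vtc[0][2]))           # close to 1: axis 2, the real signal
```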
A mean of zero is needed for finding a basis that minimizes the mean square error of the approximation of the data. Assuming zero empirical mean (the empirical mean of the distribution has been subtracted from the data set), the first principal component w_1 of a data set x can be defined as

    w_1 = arg max_{||w||=1} E{ (w^T x)^2 }

With the first k − 1 components, the kth component can be found by subtracting the first k − 1 principal components from x,

    x̂_{k-1} = x − Σ_{i=1}^{k-1} w_i w_i^T x

and by substituting this as the new data set in which to find a principal component:

    w_k = arg max_{||w||=1} E{ (w^T x̂_{k-1})^2 }

The Karhunen-Loève transform is therefore equivalent to finding the singular value decomposition of the data matrix X,

    X = W Σ V^T

and then obtaining the reduced-space data matrix Y by projecting X down into the reduced space defined by only the first L singular vectors, W_L:

    Y = W_L^T X

The matrix W of singular vectors of X is equivalently the matrix W of eigenvectors of the matrix of observed covariances C = X X^T, since

    X X^T = W Σ Σ^T W^T

The eigenvectors with the largest eigenvalues correspond to the dimensions that have the strongest correlation in the data set (see Rayleigh quotient). PCA is equivalent to empirical orthogonal functions (EOF), a name which is used in meteorology. An auto-encoder neural network with a linear hidden layer is similar to PCA. Upon convergence, the weight vectors of the K neurons in the hidden layer will form a basis for the space spanned by the first K principal components. Unlike PCA, however, this technique will not necessarily produce orthogonal vectors. PCA is a popular primary technique in pattern recognition, but it is not optimized for class separability. An alternative is linear discriminant analysis, which does take this into account.

2.2.2 PCA Properties and Limitations

PCA is theoretically the optimal linear scheme, in terms of least mean square error, for compressing a set of high-dimensional vectors into a set of lower-dimensional vectors and then reconstructing the original set. It is a non-parametric analysis, and the answer is unique and independent of any hypothesis about the data's probability distribution.
However, the latter two properties can be regarded as a weakness as well as a strength: being non-parametric, no prior knowledge can be incorporated, and PCA compression often incurs a loss of information. The applicability of PCA is limited by the assumptions made in its derivation. These assumptions are:

1. We assume the observed data set to be linear combinations of certain basis vectors. Non-linear methods such as kernel PCA have been developed without assuming linearity.
2. PCA uses the eigenvectors of the covariance matrix, and it only finds the independent axes of the data under the Gaussian assumption. For non-Gaussian or multi-modal Gaussian data, PCA merely de-correlates the axes.
3. When PCA is used for clustering, its main limitation is that it does not account for class separability, since it makes no use of the class labels of the feature vectors. There is no guarantee that the directions of maximum variance will contain good features for discrimination.

PCA simply performs a coordinate rotation that aligns the transformed axes with the directions of maximum variance. It is only when we believe that the observed data have a high signal-to-noise ratio that the principal components with larger variance correspond to interesting dynamics while lower ones correspond to noise.

2.2.3 Computing PCA with the covariance method

The following is a detailed description of PCA using the covariance method. The intention is to transform a given data set X of dimension M to an alternative data set Y of smaller dimension L.
Equivalently, we are seeking to find the matrix Y, where Y is the Karhunen-Loève transform (KLT) of matrix X.

Organize the data set. Suppose you have data comprising a set of observations of M variables, and you want to reduce the data so that each observation can be described with only L variables, L < M. Write the data as N column vectors, each of which has M rows, and place the column vectors into a single matrix X of dimensions M × N.

Calculate the empirical mean. Find the empirical mean along each dimension m = 1, ..., M, and place the calculated mean values into an empirical mean vector u of dimensions M × 1.

Calculate the deviations from the mean. Mean subtraction is an integral part of the solution towards finding a principal component basis that minimizes the mean square error of approximating the data. Hence we proceed by centering the data as follows: subtract the empirical mean vector u from each column of the data matrix X, and store the mean-subtracted data in the M × N matrix B:

    B = X − u h

where h is a 1 × N row vector of all 1s.

Find the covariance matrix. Find the M × M empirical covariance matrix C from the outer product of matrix B with itself:

    C = E[B ⊗ B] = E[B · B*] = (1/N) B · B*

where E is the expected value operator, ⊗ is the outer product operator, and * is the conjugate transpose operator. Note that outer products apply to vectors; the covariance matrix in PCA is a sum of outer products between its sample vectors, which is why it can be represented as B · B*.

Find the eigenvectors and eigenvalues of the covariance matrix. Compute the matrix V of eigenvectors which diagonalizes the covariance matrix C:

    V^{-1} C V = D

where D is the diagonal matrix of eigenvalues of C. This step will typically involve the use of a computer-based algorithm for computing eigenvectors and eigenvalues.
These algorithms are readily available as sub-components of most matrix algebra systems, such as MATLAB, Mathematica, SciPy, IDL (Interactive Data Language), or GNU Octave, as well as OpenCV. Matrix D will take the form of an M × M diagonal matrix whose mth diagonal element is the mth eigenvalue of the covariance matrix C. Matrix V, also of dimension M × M, contains M column vectors, each of length M, which represent the M eigenvectors of the covariance matrix C. The eigenvalues and eigenvectors are ordered and paired: the mth eigenvalue corresponds to the mth eigenvector.

Rearrange the eigenvectors and eigenvalues. Sort the columns of the eigenvector matrix V and the eigenvalue matrix D in order of decreasing eigenvalue, making sure to maintain the correct pairings between the columns in each matrix.

Compute the cumulative energy content for each eigenvector. The eigenvalues represent the distribution of the source data's energy among each of the eigenvectors, where the eigenvectors form a basis for the data. The cumulative energy content g for the mth eigenvector is the sum of the energy content across all of the eigenvalues from 1 through m:

    g[m] = Σ_{q=1}^{m} D[q, q]

Select a subset of the eigenvectors as basis vectors. Save the first L columns of V as the M × L matrix W. Use the vector g as a guide in choosing an appropriate value for L. The goal is to choose a value of L as small as possible while achieving a reasonably high value of g on a percentage basis. For example, you may want to choose L so that the cumulative energy g is above a certain threshold, like 90 percent.
In this case, choose the smallest value of L such that

    g[L] / g[M] ≥ 0.9

Convert the source data to z-scores. Create an M × 1 empirical standard deviation vector s from the square root of each element along the main diagonal of the covariance matrix C, and calculate the M × N z-score matrix

    Z = B / (s h)    (divide element-by-element)

Note: while this step is useful for various applications, as it normalizes the data set with respect to its variance, it is not an integral part of PCA/KLT.

Project the z-scores of the data onto the new basis. The projected vectors are the columns of the matrix

    Y = W* · Z

where W* is the conjugate transpose of the eigenvector matrix. The columns of matrix Y represent the Karhunen-Loève transforms (KLT) of the data vectors in the columns of matrix X.

2.2.4 PCA Derivation

Let X be a d-dimensional random vector expressed as a column vector. Without loss of generality, assume X has zero mean. We want to find an orthonormal transformation matrix P such that

    Y = P X

with the constraint that cov(Y) is a diagonal matrix and P^{-1} = P^T. By substitution and matrix algebra, we obtain

    cov(Y) = E[Y Y^T] = P cov(X) P^T

Multiplying on the right by P, we now have

    cov(Y) P = P cov(X)

Writing P in terms of d vectors P_1, ..., P_d (the rows of P) and cov(Y) = diag(λ_1, ..., λ_d), and substituting into the equation above, we obtain

    λ_i P_i = cov(X) P_i

Notice that P_i is an eigenvector of the covariance matrix of X. Therefore, by finding the eigenvectors of the covariance matrix of X, we find a projection matrix P that satisfies the original constraints.

CHAPTER 3
DISCRETE COSINE TRANSFORM

3.1 Introduction

A discrete cosine transform (DCT) expresses a sequence of finitely many data points in terms of a sum of cosine functions oscillating at different frequencies. DCTs are important to numerous applications in engineering, from lossy compression of audio and images to spectral methods for the numerical solution of partial differential equations.
The use of cosine rather than sine functions is critical in these applications: for compression, it turns out that cosine functions are much more efficient, whereas for differential equations the cosines express a particular choice of boundary conditions. In particular, a DCT is a Fourier-related transform similar to the discrete Fourier transform (DFT), but using only real numbers. DCTs are equivalent to DFTs of roughly twice the length, operating on real data with even symmetry (since the Fourier transform of a real and even function is real and even), where in some variants the input and/or output data are shifted by half a sample. There are eight standard DCT variants, of which four are common. The most common variant of the discrete cosine transform is the type-II DCT, which is often called simply "the DCT"; its inverse, the type-III DCT, is correspondingly often called simply "the inverse DCT" or "the IDCT". Two related transforms are the discrete sine transform (DST), which is equivalent to a DFT of real and odd functions, and the modified discrete cosine transform (MDCT), which is based on a DCT of overlapping data.

3.2 DCT forms

Formally, the discrete cosine transform is a linear, invertible function F : R^N → R^N, or equivalently an invertible N × N square matrix. There are several variants of the DCT with slightly modified definitions. The N real numbers x_0, ..., x_{N−1} are transformed into the N real numbers X_0, ..., X_{N−1} according to one of the following formulas.

DCT-I:

    X_k = (1/2)(x_0 + (−1)^k x_{N−1}) + Σ_{n=1}^{N−2} x_n cos[π n k / (N−1)]

Some authors further multiply the x_0 and x_{N−1} terms by √2, and correspondingly multiply the X_0 and X_{N−1} terms by 1/√2. This makes the DCT-I matrix orthogonal, if one further multiplies by an overall scale factor of √(2/(N−1)), but breaks the direct correspondence with a real-even DFT. The DCT-I is exactly equivalent to a DFT of 2N − 2 real numbers with even symmetry.
For example, a DCT-I of N = 5 real numbers abcde is exactly equivalent to a DFT of eight real numbers abcdedcb, divided by two. Note, however, that the DCT-I is not defined for N less than 2. Thus, the DCT-I corresponds to the boundary conditions: x_n is even around n = 0 and even around n = N−1; similarly for X_k.

DCT-II:

    X_k = Σ_{n=0}^{N−1} x_n cos[(π/N)(n + 1/2) k]

The DCT-II is probably the most commonly used form, and is often simply referred to as "the DCT". This transform is exactly equivalent to a DFT of 4N real inputs of even symmetry where the even-indexed elements are zero. That is, it is half of the DFT of the 4N inputs y_n, where y_{2n} = 0, y_{2n+1} = x_n for 0 ≤ n < N, and y_{4N−n} = y_n for 0 < n < 2N. Some authors further multiply the X_0 term by 1/√2 and multiply the resulting matrix by an overall scale factor of √(2/N). This makes the DCT-II matrix orthogonal, but breaks the direct correspondence with a real-even DFT of half-shifted input. The DCT-II implies the boundary conditions: x_n is even around n = −1/2 and even around n = N − 1/2; X_k is even around k = 0 and odd around k = N.

DCT-III:

    X_k = (1/2) x_0 + Σ_{n=1}^{N−1} x_n cos[(π/N) n (k + 1/2)]

Because it is the inverse of DCT-II (up to a scale factor, see below), this form is sometimes simply referred to as "the inverse DCT" (IDCT). Some authors further multiply the x_0 term by √2 and multiply the resulting matrix by an overall scale factor of √(2/N), so that the DCT-II and DCT-III are transposes of one another.
This makes the DCT-III matrix orthogonal, but breaks the direct correspondence with a real-even DFT of half-shifted output. The DCT-III implies the boundary conditions: x_n is even around n = 0 and odd around n = N; X_k is even around k = −1/2 and even around k = N − 1/2.

DCT-IV:

    X_k = Σ_{n=0}^{N−1} x_n cos[(π/N)(n + 1/2)(k + 1/2)]

The DCT-IV matrix becomes orthogonal if one further multiplies by an overall scale factor of √(2/N). A variant of the DCT-IV, where data from different transforms are overlapped, is called the modified discrete cosine transform (MDCT) (Malvar, 1992). The DCT-IV implies the boundary conditions: x_n is even around n = −1/2 and odd around n = N − 1/2; similarly for X_k.

DCT V-VIII

DCT types I-IV are equivalent to real-even DFTs of even order, since the corresponding DFT is of length 2(N−1) (for DCT-I), 4N (for DCT-II/III), or 8N (for DCT-IV). In principle, there are actually four additional types of discrete cosine transform, corresponding essentially to real-even DFTs of logically odd order, which have factors of N + 1/2 in the denominators of the cosine arguments. Equivalently, DCTs of types I-IV imply boundaries that are even/odd around either a data point for both boundaries or halfway between two data points for both boundaries. DCTs of types V-VIII imply boundaries that are even/odd around a data point for one boundary and halfway between two data points for the other boundary. However, these variants seem to be rarely used in practice. One reason, perhaps, is that FFT algorithms for odd-length DFTs are generally more complicated than FFT algorithms for even-length DFTs (e.g., the simplest radix-2 algorithms are only for even lengths), and this increased intricacy carries over to the DCTs as described below.

Inverse transforms

Using the normalization conventions above, the inverse of DCT-I is DCT-I multiplied by 2/(N−1). The inverse of DCT-IV is DCT-IV multiplied by 2/N. The inverse of DCT-II is DCT-III multiplied by 2/N, and vice versa.
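The 2/N relationship between the DCT-II and the DCT-III can be checked directly with a small NumPy sketch that implements both unnormalized transforms from their defining sums. (Note that library routines such as SciPy's `scipy.fft.dct` include an extra factor of 2 in each unnormalized transform, so the composition factor there is 2N instead of N/2.)

```python
import numpy as np

def dct2(x):
    # Unnormalized DCT-II: X_k = sum_n x_n cos[(pi/N)(n + 1/2) k]
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.cos(np.pi / N * (n + 0.5) * k))
                     for k in range(N)])

def dct3(x):
    # Unnormalized DCT-III: X_k = x_0/2 + sum_{n>=1} x_n cos[(pi/N) n (k + 1/2)]
    N = len(x)
    n = np.arange(1, N)
    return np.array([x[0] / 2 + np.sum(x[1:] * np.cos(np.pi / N * n * (k + 0.5)))
                     for k in range(N)])

rng = np.random.default_rng(0)
x = rng.random(8)
# The inverse of DCT-II is DCT-III multiplied by 2/N, and vice versa.
assert np.allclose(2 / len(x) * dct3(dct2(x)), x)
assert np.allclose(2 / len(x) * dct2(dct3(x)), x)
```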
As with the DFT, the normalization factor in front of these transform definitions is merely a convention and differs between treatments. For example, some authors multiply the transforms by √(2/N) so that the inverse does not require any additional multiplicative factor. Combined with appropriate factors of √2 (see above), this can be used to make the transform matrix orthogonal.

Multidimensional DCTs

Multidimensional variants of the various DCT types follow straightforwardly from the one-dimensional definitions: they are simply a separable product (equivalently, a composition) of DCTs along each dimension. For example, a two-dimensional DCT-II of an image or a matrix is simply the one-dimensional DCT-II, from above, performed along the rows and then along the columns (or vice versa). That is, the 2-D DCT-II is given by the formula (omitting normalization and other scale factors, as above):

    X_{k1,k2} = Σ_{n1=0}^{N1−1} Σ_{n2=0}^{N2−1} x_{n1,n2} cos[(π/N1)(n1 + 1/2) k1] cos[(π/N2)(n2 + 1/2) k2]

Technically, computing a two- (or multi-) dimensional DCT by sequences of one-dimensional DCTs along each dimension is known as a row-column algorithm. As with multidimensional FFT algorithms, however, there exist other methods to compute the same thing while performing the computations in a different order. The inverse of a multi-dimensional DCT is just a separable product of the inverses of the corresponding one-dimensional DCTs, e.g., the one-dimensional inverses applied along one dimension at a time in a row-column algorithm.

Consider the combinations of horizontal and vertical frequencies for an 8 × 8 (N1 = N2 = 8) two-dimensional DCT, arranged as a grid of frequency squares. Each step from left to right and top to bottom is an increase in frequency by a half cycle. For example, moving one square right from the top-left yields a half-cycle increase in the horizontal frequency. Another move to the right yields two half-cycles. A move down yields two half-cycles horizontally and a half-cycle vertically.
The source data (8 × 8) is transformed to a linear combination of these 64 frequency squares.

CHAPTER 4
IMPLEMENTATION AND RESULTS

4.1 Introduction

In the previous chapters (chapter 2 and chapter 3), we gained theoretical knowledge about the Principal Component Analysis and the Discrete Cosine Transform. In our thesis work we have seen the analysis of both transforms. To execute these tasks we chose a platform called MATLAB, which stands for "matrix laboratory". It is an efficient environment for digital image processing. The image processing toolbox in MATLAB is a collection of MATLAB functions that extend the capability of the MATLAB environment for the solution of digital image processing problems. [13]

4.2 Practical implementation of performance analysis

As discussed earlier, we are going to perform the analysis for the two transform methods, applied to the images as,
