Elisa Robotti^{*}, Marcello Manfredi and Emilio Marengo  
Department of Sciences and Technological Innovation, University of Piemonte Orientale, Viale Michel 11, 15121 Alessandria, Italy  
Corresponding Author: Elisa Robotti, Department of Sciences and Technological Innovation, University of Piemonte Orientale, Viale Michel 11, 15121 Alessandria, Italy. Tel: +39 0131 360272; Fax: +39 0131 360250; Email: elisa.robotti@mfn.unipmn.it 
Received September 18, 2013; Accepted January 27, 2014; Published January 29, 2014  
Citation: Robotti E, Manfredi M, Marengo E (2014) Biomarkers Discovery through Multivariate Statistical Methods: A Review of Recently Developed Methods and Applications in Proteomics. J Proteomics Bioinform S3:003. doi: 10.4172/jpb.S3003  
Copyright: © 2014 Robotti E, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.  
Biomarker discovery is a discipline of increasing importance, since it provides diagnostic/prognostic markers and may permit investigation and understanding of the mechanisms underlying the development of a pathology, possibly suggesting new biomolecular therapeutic targets. Biomarker discovery in proteomics is hampered by the use of high-throughput techniques providing a great number of candidates among which the true biomarkers have to be searched for. Moreover, often only a small number of samples is available. Two main problems arise when biomarkers have to be searched for in such datasets: 1) the identification of reliable markers, avoiding false positives due to chance correlations; 2) the exhaustive identification of all candidate markers, to obtain a complete snapshot of the effect investigated.
Biomarkers can be identified by two approaches: classical monovariate methods, where each biomarker is considered as independent (Student's t-test, Mann-Whitney test, etc.), or multivariate methods, able to take into consideration the correlation structure of the data (i.e. interactions). The latter are certainly to be preferred and should achieve the best compromise between predictive ability (accomplished through the use of variable selection procedures) and exhaustivity. Here, we review the most recent applications of multivariate methods for the identification of biomarkers in proteomics with particular regard to the statistical methods exploited.
Keywords 
Biomarker identification; Multivariate statistical methods; Variable selection procedures; PCA; PLS-DA; Random forests; SVM; Ranking-PCA 
Introduction 
The field of biomarker discovery has recently developed as one of the most challenging areas of research. Different and heterogeneous disciplines involve the study of biomarkers: clinical and environmental chemistry, ecotoxicology, food research, plant and animal biology. In general, we can speak of biomarker identification whenever a pool of features differentiating two or more groups of samples has to be identified. As concerns the particular application to proteomics, the search for biomarkers involves the identification of features responsible for sample differentiation from a quite wide range of instrumental applications: from classical proteomics [1-23], exploiting 2D-PAGE and 2D-DIGE, to mass spectrometry-based approaches based on MALDI-TOF [24-30] and SELDI-TOF profiling [31-42], HPLC-MS [43-57] or shotgun approaches [58,59]. 
All these heterogeneous applications have a common denominator: all of them are characterized by a great number of variables describing each sample, among which biomarkers have to be searched for. Moreover, the great number of variables is often accompanied by a small number of available samples. Two main problems therefore arise when biomarkers have to be identified in such datasets: 
 The identification of reliable markers, avoiding false positives, as is the case when chance correlations occur; 
 The exhaustive identification of all candidate markers, necessary to obtain a complete snapshot of the effect investigated (a disease, a drug effect, a ripening effect etc). 
In all the areas of biomarker search, the final datasets in which biomarkers are searched for are characterised by a series of samples divided into two or more classes and described by a series of variables or features. These datasets can be evaluated by two different strategies: 
1) classical statistical methods that identify significant biomarkers by monovariate statistical tests where each biomarker is considered as independent from the others; 
2) multivariate methods, able to take into consideration the correlation structure of the data and the synergies and antagonisms (i.e. interactions) existing among the potential biomarkers. This approach is certainly more effective: it generally ensures diagnostic and prognostic performances superior to single markers in terms of sensitivity, specificity and reliability. 
The classic approach to the identification of biomarkers involves the evaluation of the variables showing a statistically significantly different behavior between two groups of samples (e.g. control vs. pathological, etc.) by classical statistical tests, applied to each biomarker candidate separately, focused on the evaluation of the type I error comparison-wise (for each hypothesis independently) or experiment-wise (testing all hypotheses together). The second alternative is certainly to be preferred, since the type I error probability increases with the number of tests (for k hypotheses it becomes 1−(1−α)^{k}). Usually Student's t-tests are applied with a correction that takes into account the number of multiple tests performed (Bonferroni's with subsequent modifications [60], Dunn's, Šidák's and Dunnett's [61-63]). Non-parametric tests can also be applied, like the Mann-Whitney test [60], or procedures based on the analysis of variance in its univariate (ANOVA) or multivariate (MANOVA) version [60]; MANOVA in particular allows more than two groups of samples to be compared. All these strategies have the disadvantage of considering each variable independently from the others, so that the correlation structure of the data is not taken into account. This is an important drawback, since proteomic datasets are usually quite correlated and groups of molecules showing similar or opposite behavior can easily be encountered, as they often belong to common biochemical pathways where they are transformed into one another or react with each other. 
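As a simple numerical illustration of the comparison-wise versus experiment-wise control of the type I error, the sketch below applies Student's t-tests with a Bonferroni correction to synthetic spot-volume data (all names and numbers are illustrative, not taken from the cited studies):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# hypothetical dataset: 200 candidate spots, 8 controls vs. 8 pathological samples
controls = rng.normal(0.0, 1.0, size=(8, 200))
cases = rng.normal(0.0, 1.0, size=(8, 200))
cases[:, :5] += 3.0                     # 5 truly regulated spots

t, p = stats.ttest_ind(controls, cases, axis=0)
alpha = 0.05
comparison_wise = p < alpha             # each hypothesis tested independently
experiment_wise = p < alpha / p.size    # Bonferroni: alpha shared by all 200 tests
```

The comparison-wise rule typically flags extra false positives among the 195 unregulated spots, while the Bonferroni threshold keeps the experiment-wise type I error below α at the price of lower power.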
Multivariate methods instead compare two or more groups of samples considering the relationships existing between the candidate molecules, i.e. the synergic or antagonistic effects of different factors. They are usually coupled to variable selection procedures [16,17] to provide a reduced set of candidate biomarkers with the best predictive ability; standard variable selection tools are in fact aimed at identifying the smallest set of variables with the best predictive ability. It is important to point out that biomarkers are useful not only for diagnostic purposes but also for functional studies investigating the mechanism of action of a disease or of a particular effect (e.g. ripening, pollution, etc.): from this point of view, high-throughput techniques provide a lot of information that should not be neglected, in order to provide a complete view of the investigated effect. It is therefore the authors' opinion that exhaustivity should be addressed as well [64,65]. 
Some reviews have recently been published on biomarker discovery in several fields of proteomics: food and beverages [66], toxicology [67], molecular plant physiology [68], clinical proteomics with the particular application to colorectal cancer [69], Alzheimer's disease [70], traumatic brain injury [71], type-1 diabetes [72], urine molecular profiling [73,74]. Other authors instead reviewed the application of instrumental or statistical tools for biomarker identification in proteomics: LC-MS [75], MALDI [76], machine learning algorithms [77] and statistical data processing in clinical proteomics [78]. 
Here, we review the most recent applications of multivariate statistical methods for the identification of biomarkers in proteomics, with particular attention to the statistical methods exploited. Literature applications are grouped by the multivariate method exploited; a brief description of the theoretical aspects of each method is presented at the beginning of each paragraph. 
Discussion 
Multivariate methods compare two or more groups of samples considering the relationships existing between the variables (candidate molecules): this corresponds to considering their synergic or antagonistic effects, i.e. their interactions. In the field of biomarkers identification, these methods usually belong to one of these categories: 
1) unsupervised pattern recognition methods, also called clustering methods, based on no a priori information: the evaluation of the existence of groups of samples is suggested by the statistical method itself; 
2) supervised classification methods, based on a priori information about the membership of each sample to a specific class: aimed at the identification of the variables responsible for the separation of the samples into the different classes. 
All methods are applied to datasets where each sample is described by a series of variables: spot volumes in datasets from 2D-PAGE and 2D-DIGE, or peak intensities in mass spectrometry-based approaches. It is important to point out that multivariate methods can be applied to all quantitative proteomic analyses; quantitation is a fundamental requirement for biomarker discovery. Here, only the methods most recently applied in the literature in the proteomic field will be discussed. Table 1 reports all the main statistical methods used in proteomics. 
Classification performance 
Classification can be evaluated by representing the classification matrix C, where true classes are reported on rows and assigned classes on columns; correct classifications are therefore reported along the matrix diagonal (elements c_{gg}), while wrong assignments are reported in extradiagonal elements (c_{ij}, i≠j). From the classification matrix it is possible to calculate some parameters accounting for the performance of classification: 
Non-error rate (NER%): the percentage of overall correct assignments:

NER% = 100 · (Σ_{g=1..G} c_{gg}) / n

where c_{gg} are the diagonal elements of the classification matrix, n is the overall number of objects and G is the overall number of classes. 
Selectivity S_{g}: the percentage of non-overlap between the classes:

S_{g} = 100 · [1 − (Σ_{i≠g} c_{i,g}) / (n − n_{g})]

where c_{i,g} are the extra-diagonal elements of column g (the sum runs on the rows of the classification matrix), n is the overall number of objects and n_{g} is the number of objects in class g. 
Specificity Sp_{g}: the NER% of each class:

Sp_{g} = 100 · c_{gg} / n_{g}

where c_{gg} is the number of correct classifications of class g and n_{g} is the number of objects of class g. 
These parameters can be calculated both in calibration and in cross-validation, to obtain an evaluation of the predictive ability of the classification model. 
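These definitions translate directly into code; the sketch below computes NER%, selectivity and specificity from a two-class classification matrix (the selectivity expression is one plausible reading of the verbal definition above):

```python
import numpy as np

def classification_metrics(C):
    """NER%, selectivity and specificity from a classification matrix C
    (true classes on rows, assigned classes on columns)."""
    C = np.asarray(C, dtype=float)
    n = C.sum()                              # overall number of objects
    n_g = C.sum(axis=1)                      # objects per true class
    diag = np.diag(C)                        # correct classifications
    ner = 100.0 * diag.sum() / n
    # objects of the other classes wrongly assigned to class g (column sums)
    wrong_into_g = C.sum(axis=0) - diag
    selectivity = 100.0 * (1.0 - wrong_into_g / (n - n_g))
    specificity = 100.0 * diag / n_g         # class-wise NER%
    return ner, selectivity, specificity

ner, sel, spec = classification_metrics([[9, 1],
                                         [2, 8]])
```

For this matrix NER%=85; the selectivity of the first class is 80% (2 of the 10 objects of the other class overlap into it) and its specificity is 90%.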
Unsupervised pattern recognition methods 
Several applications of pattern recognition methods are present in the literature: in most papers Principal Component Analysis (PCA) and/or hierarchical clustering are applied, both to classical proteomics (2D-PAGE or 2D-DIGE maps) and to mass spectrometric data from direct sample analysis by MALDI-TOF, SELDI-TOF or HPLC-MS. Both these approaches can be applied to all quantitative analytical methods exploited in proteomics and show no strict constraints on the number of variables and/or samples that have to be present in the dataset: in the case of PCA, when fewer samples than variables are present, the maximum number of PCs that can be calculated equals the number of samples. 
Principal Component Analysis (PCA): PCA is by far the most widespread pattern recognition tool in proteomics: in this section a selection of the applications regarding exclusively PCA or hierarchical clustering will be presented, a more exhaustive list being reported in Table 2. 
PCA [79,80] represents the objects, described by the original variables, in a new reference system characterised by new variables called Principal Components (PCs). PCs are calculated hierarchically: the first PC accounts for the maximum variance contained in the original dataset, while subsequent PCs account for the maximum residual variance; in this way systematic variations are explained in the first PCs while experimental noise and random variations are contained in the last ones. The PCs are linear combinations of the original variables. They are also orthogonal to each other, thus accounting for independent sources of information. A graphical example is presented in Figure 1a, where PCs are calculated in a simple case where only two original variables X1 and X2 are present. PCs are often used for dimensionality reduction due to their hierarchical nature: the original variables can be substituted by a smaller number of significant PCs, containing only relevant information. PCA provides two main tools for data analysis (Figure 1b): 
 the scores, representing the coordinates of the samples in the space given by PCs; 
 the loadings, representing the coefficients of the linear combination describing each PC, i.e. the weights of the original variables on each PC. 
The samples are usually graphically represented in the space given by the PCs (through their scores) to allow the identification of groups of samples showing similar (samples close to each other in the graph) or different (samples far from each other) behaviours. By looking at the corresponding loading plot (where the weights of each original candidate molecule on each PC are represented), it is possible to identify the variables responsible for the similar or different behaviours detected for the samples in the score plot. 
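The score/loading decomposition can be sketched with an SVD-based PCA on a synthetic spot-volume matrix (more variables than samples, as is typical in proteomics; the data are random and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(12, 50))            # 12 samples described by 50 spot volumes

Xc = X - X.mean(axis=0)                  # column-centre before PCA
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                           # coordinates of the samples on the PCs
loadings = Vt.T                          # weights of the original variables on each PC
explained = 100.0 * s**2 / np.sum(s**2)  # % of variance captured by each PC

# scores @ loadings.T reconstructs the centred data exactly
```

With 12 samples and 50 variables at most 12 PCs can be computed, illustrating the constraint mentioned above; the first PCs carry the systematic variation, while the last ones mostly carry noise.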
In most applications PCA is exploited for the visualization of the results, in terms of the separation achieved for the different classes in the space given by the relevant PCs. However, PCA provides important information worth exploiting for data interpretation as well: the loadings provide information on the candidate biomarkers and their up- or down-regulation in the investigated case study; some applications are also present that exploit this information for the identification of the most relevant biomarkers. 
Some applications regard the identification of biomarkers in ecotoxicological studies: for the biomonitoring of mussels after the Prestige oil spill by 2D-PAGE [81]; to identify biomarkers of exposure to alkylphenols [31] and estrogens [32] in plasma samples of Atlantic cods by SELDI-TOF MS; to study the proteomic pattern of Sydney Rock oysters exposed to metal contamination by 2D-PAGE [2]. 
However, the majority of applications are in the field of clinical biomarker discovery, to investigate (Table 2): a) coronary syndromes [82]; b) the influence of clotting time on the protein composition of serum samples [45]; c) acute myocardial infarction [4]; d) inflamed and non-inflamed colon biopsies in ulcerative colitis [5]; depression [25]; e) non-small cell lung cancer [3,9]; f) whether a high intake of industrial or ruminant trans fatty acids affects the plasma proteome [8]; g) the effect of amphetamine in rats [26]; h) thyroid proliferative diseases [34]; i) idiopathic pneumonia syndrome following allogeneic stem cell transplantation [83]; j) the effect of Trichostatin A on pancreatic ductal carcinoma cells [20]; and the development of a special class of aptamers [84], called SOMAmers (slow off-rate modified aptamers), which bind specifically to proteins in body fluids. 
Other applications regard the use of PCA as a tool for assessing measurement reproducibility in biomarker research: Liggett et al. [33] applied PCA to SELDITOF profiles of repeated measurements of a reference human serum standard, while Govorukhina et al. [44] developed an improved sample preparation method to perform future comparative analyses of samples from a serum bank of cervical cancer patients. 
Cluster analysis and hierarchical clustering: Cluster analysis techniques [79,80,85] allow the identification of groups of samples or of descriptors in a dataset. The most used clustering methods belong to the class of agglomerative hierarchical methods [79,80], where the samples are grouped (linked together) on the basis of a measure of their similarity. The most similar samples or groups of samples are linked first. The final result is a graph, called a dendrogram, where the samples are represented on the X axis and are connected at decreasing levels of similarity along the Y axis. The groups can be identified by applying a horizontal cut of the dendrogram, i.e. at a particular level of dissimilarity, and counting the number of vertical lines crossed by the horizontal cut. Figure 2 reports the example of a dendrogram obtained from a 2D-PAGE spot volume dataset: cutting at level 2000 identifies two clusters, corresponding to 2 different human pancreatic tumour cell lines, T3M4 and PACA44, while cutting at level 4500 identifies 4 clusters (the two cell lines treated and untreated with Trichostatin A). The results of hierarchical clustering strongly depend on the specific measure of similarity and on the linking method adopted. Clustering techniques can be applied either to the original variables or to the scores of the relevant PCs, thus achieving a clustering that exploits only useful sources of variation, since the experimental error is confined to the last PCs [85]. Two-way hierarchical clustering provides a graphical representation of the clustering of both variables and samples, to visually identify clusters of samples and at the same time provide information on the behaviour of the variables in the different clusters. This particular application is quite widespread in both proteomics and genomics. 
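The dendrogram-cutting procedure can be sketched as follows on synthetic data mimicking two cell lines (Ward linkage on Euclidean distances is used here as one common choice; it is not necessarily the one adopted in the cited studies):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
# two hypothetical cell lines, 5 samples each, described by 30 spot volumes
line_a = rng.normal(0.0, 1.0, size=(5, 30))
line_b = rng.normal(4.0, 1.0, size=(5, 30))
X = np.vstack([line_a, line_b])

Z = linkage(X, method="ward")           # agglomerative hierarchical clustering
# horizontal cut of the dendrogram yielding exactly two clusters
labels = fcluster(Z, t=2, criterion="maxclust")
```

scipy.cluster.hierarchy.dendrogram(Z) would draw the tree itself; cutting at a given dissimilarity level instead of a cluster count corresponds to criterion="distance".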
Hierarchical clustering has been applied in combination with PCA in a series of papers focused on the identification of biomarkers in clinical and environmental applications: to identify groups of samples with a similar behavior and/or groups of variables with a similar expression. In some cases two-way hierarchical clustering was applied [13,86], to identify groups of samples and at the same time provide information about the behavior of the variables in the identified groups. Also in this case, hierarchical clustering can be applied to data from classical proteomics by 2D-PAGE or 2D-DIGE, or to mass spectrometric data from MALDI-TOF, SELDI-TOF or HPLC-MS. 
Most applications (see Table 2 for more details) are in the field of clinical proteomics, to investigate: a) ovarian [49], prostate [11] and colorectal cancer [12]; b) systemic and invasive candidiasis [13,86]; c) the plasma proteome in a mouse intestinal tumor model [50]; d) serum peptidomics in Crohn's disease [43]; e) liver proteomics in progressive alcoholic steatosis [10]. 
Two applications regard MALDI imaging: in the first study Deininger et al. [87] applied MALDI imaging to compare spectra from controls and patients affected by different cancers. The reconstruction of images based on PC scores allowed an unsupervised feature extraction of the dataset. Generally, these images were in good agreement with the histology of the samples. The hierarchical clustering allowed access to the multidimensional information in the dataset and the selection of spectra classes representative of different tissue features. PCA showed that the tumor and control mucosa were separated in the first three PCs. In the second study Elsner et al. [88] applied hierarchical clustering to MALDI imaging MS results, to characterize proteomic changes found in Barrett's adenocarcinoma and its premalignant stages and to find proteins that might be used as markers for monitoring cancer development as well as for predicting regional lymph node metastasis and disease outcome. 
Two papers regard applications to environmental analysis for the assessment of marine pollution on blue mussels [14] and for evaluating the protein expression in liver and brain extracts of hakes [15]. 
Supervised classification methods 
Supervised classification tools are used to assign the objects to the classes present, which are known a priori (e.g. control versus pathological), and to provide the variables most responsible for their belonging to the different classes (candidate biomarkers). Their application in the proteomics field is focused both on the development of diagnostic tools and on the identification of the differences existing between the classes, to shed light on the mechanism of action of the effect under investigation (e.g. a disease or a new drug in the biomedical field, ripening in food research, a cultivar in plant biology, etc.). Here, only the methods already applied to the identification of biomarkers in the proteomics field will be presented (Table 1). 
All the presented methods can be applied to all quantitative analytical methods exploited in proteomics. Methods based on PCA or on projection to latent structures (such as PLS-DA and OPLS) show no strict constraints on the number of variables and/or samples that have to be present in the dataset: they provide a substantial dimensionality reduction and, when fewer samples than original variables are present, the maximum number of latent structures that can be considered equals the number of samples. Other methods, such as Linear Discriminant Analysis (LDA), cannot be applied when the number of variables exceeds the number of available samples: when this constraint is not satisfied, LDA can be applied to the PCs or coupled to variable selection procedures, giving a final model containing at most as many original variables as there are samples. 
Several papers have recently appeared on the development and application of classification tools to proteomic datasets (2DE, 2D-DIGE, MALDI-TOF, SELDI-TOF, HPLC-MS data). These tools certainly represent a better approach than the use of pattern recognition methods, since they provide mathematical models able to clearly identify sets of candidate biomarkers and provide information on classification performance. In the field of proteomics, a great panorama of different techniques has recently been developed and applied: some of the most significant papers in this field will be presented here, separated according to the classification tool exploited in the study, a more exhaustive list of applications being presented in Table 3. 
Principal Component Analysis-Discriminant Analysis (PCA-DA): PCA-DA was first developed by Hoogerbrugge et al. [89]. In this method PCA is exploited as a dimensionality reduction tool. The space defined by the relevant PCs is used to identify linear discriminant functions (LDFs) able to separate the samples into the classes present. The first LDF, D_{1}, is defined as: 
D_{1} = argmax_{d} (d^{T}Bd)/(d^{T}Wd), i.e. the direction given by the first eigenvector of W^{−1}B, where:

B=between-group covariance matrix; 
W=within-group covariance matrix. 
The second LDF, D_{2}, is defined in the same way, but under the condition that D_{1} and D_{2} are independent. In two-class problems, only one LDF is provided, able to discriminate the samples into the two classes present. 
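A minimal sketch of the PCA-DA idea for a two-class problem (PCA for dimensionality reduction, then a Fisher discriminant in the space of the retained PCs; the data, seed and choice of 3 PCs are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
# 20 samples, 100 variables; class 0 is shifted on the first variable
X = rng.normal(size=(20, 100))
y = np.array([0] * 10 + [1] * 10)
X[y == 0, 0] += 3.0

# step 1: PCA as dimensionality reduction (keep 3 PCs)
Xc = X - X.mean(axis=0)
U, s, _ = np.linalg.svd(Xc, full_matrices=False)
T = (U * s)[:, :3]                      # scores on the relevant PCs

# step 2: Fisher LDF in PC space, d maximising d'Bd / d'Wd;
# for two classes this reduces to d = W^-1 (m0 - m1)
m0, m1 = T[y == 0].mean(axis=0), T[y == 1].mean(axis=0)
W = np.cov(T[y == 0], rowvar=False) + np.cov(T[y == 1], rowvar=False)
d = np.linalg.solve(W, m0 - m1)
D1 = T @ d                              # samples projected on the first LDF
```

Working in PC space keeps W well-conditioned even though the original dataset has far more variables than samples.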
An interesting paper by Smit et al. [90] presents a strategy for the statistical validation of discrimination models in proteomic studies by PCA-DA applied to SELDI-TOF analyses of serum samples. Different tools, such as permutation tests and single and double cross-validation, are combined to provide a statistically sound procedure for biomarker discovery. The cross-validation steps were combined with a variable selection method called rank products. The strategy was applied coupled to PCA-DA, but any other classifier could be used. A dataset containing serum samples from Gaucher patients and healthy controls was used as test case. The variable selection procedure can be briefly described as follows: the discriminant vector found with PCA-DA represents the differences between the control and the diseased groups. Since the largest peaks in this vector are most important for the discrimination, the m/z values can be selected on the basis of their absolute value in the discriminant vector. In a 10-fold cross-validation, 10 different discriminant vectors are found, in which the importance of the m/z values is different. For each of the discriminant vectors, the m/z values are ranked according to their absolute value; the m/z value with the largest absolute value gets rank 1, the next largest gets rank 2, etc. The 10 ranks of each m/z value are multiplied to obtain the rank product, and the m/z values with the lowest rank product are the ones with the largest discriminative power. Single cross-validation in combination with rank products can be used for variable selection, while the prediction error associated with the selected variables is determined with double cross-validation. The model presents S_{g}=89% and Sp_{g}=90%. The validation of the discrimination models with a combination of permutation tests and double cross-validation helps to avoid erroneous results which may derive from undersampling, as is often the case in proteomics. 
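The rank-product step described above can be sketched in isolation (the discriminant vectors of the 10 cross-validation folds are simulated here, with one variable made consistently large to mimic a stable discriminator):

```python
import numpy as np

rng = np.random.default_rng(4)
n_folds, n_vars = 10, 50
# simulated discriminant vectors, one per cross-validation fold
vectors = rng.normal(0.0, 1.0, size=(n_folds, n_vars))
vectors[:, 7] += 5.0                    # variable 7 is consistently important

# within each fold, rank variables by |weight|: largest -> rank 1
ranks = np.argsort(np.argsort(-np.abs(vectors), axis=1), axis=1) + 1
rank_product = ranks.prod(axis=0)       # product of the 10 ranks of each variable
best = int(np.argmin(rank_product))     # lowest product = largest discriminative power
```

In practice the geometric mean of the ranks is often reported instead of the raw product, to avoid overflow with many folds or variables.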
The same authors [91] applied the procedure to a dataset containing serum samples from breast cancer patients and healthy controls obtaining S_{g}=82% and Sp_{g}=86%. The final classification exploited a majority voting scheme from the ensemble classifier. 
Ranking-PCA 
Ranking-PCA is a ranking method proposed by Marengo, Robotti et al. [64,65,92], based on the description of the original data by means of PCs. The development of this method has its roots in the necessity of finding the optimal compromise between the best predictive ability of the final model and the exhaustivity of the biomarker search. PCA is used to describe the data, coupled to a ranking procedure of the candidate biomarkers in forward search. When PCA is applied to a dataset where the samples belong to two classes and their belonging to class 1 or 2 is the leading information, it should provide only the first PC as relevant for sample discrimination. When the variable ranking procedure is applied in forward search, one variable is added at each cycle. The first variable selected is the one providing the best separation between the classes on the first PC (Figure 3a). The addition of another discriminating variable further improves the distance between the two classes on PC_{1} (Figure 3b). If a non-discriminating variable is subsequently added, instead, the two classes will not be further separated on PC_{1}: the third variable will show a small weight on the first PC and will be mostly explained by other PCs. However, these subsequent PCs will not be considered relevant, since they are not related to class separation. Sometimes, more than one PC could be necessary for class separation (Figure 3c): in this case different independent sources of information related to the class structure are present. A third variable acting in this way could be explained mostly by another PC (Figure 3d) or could show a large weight on both PC_{1} and the other PC responsible for class separation: in both cases, the second PC accounting for class separation will be included in the model. The algorithm is structured in two steps: 
1) Selection of the first variable. The first variable is the one providing the largest distance between the two class centroids while preserving class compactness; 
2) Selection of the subsequent variables. In the subsequent steps the variable chosen at each step is the one providing the maximum increase of distance between the two centroids in the space given by the relevant PCs, while preserving class compactness. 
At each cycle, the possibility of including more than one PC is considered through the exhaustive evaluation of all possible classification models containing from 1 to a user-selected maximum number of PCs. The choice of the most discriminating variable is performed by cross-validation: the final model therefore represents the best model according to its predictive ability. The proposed method allows the ranking of the variables according to their discrimination ability, assuring the exhaustiveness of the results. 
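A highly simplified sketch of the forward search is given below: at each cycle the variable maximising the distance between the two class centroids in the PC space of the selected variables is added. The published algorithm additionally preserves class compactness, evaluates models with several numbers of PCs and chooses variables by cross-validation; none of that is reproduced in this illustrative sketch.

```python
import numpy as np

def forward_rank(X, y, n_select):
    """Greedy forward ranking: add, at each cycle, the variable whose
    inclusion maximises the centroid distance of the two classes in the
    space of the PCs of the selected columns (simplified criterion)."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(n_select):
        best_var, best_dist = None, -1.0
        for j in remaining:
            cols = selected + [j]
            Xs = X[:, cols] - X[:, cols].mean(axis=0)
            U, s, _ = np.linalg.svd(Xs, full_matrices=False)
            T = U * s                   # PC scores of the candidate model
            d = np.linalg.norm(T[y == 0].mean(axis=0) - T[y == 1].mean(axis=0))
            if d > best_dist:
                best_var, best_dist = j, d
        selected.append(best_var)
        remaining.remove(best_var)
    return selected

rng = np.random.default_rng(5)
X = rng.normal(size=(16, 20))
y = np.array([0] * 8 + [1] * 8)
X[y == 0, 3] += 4.0                     # variable 3 discriminates strongly
order = forward_rank(X, y, n_select=3)
```

The strongly discriminating variable is ranked first; the full ranking, rather than a single reduced subset, is what supports the exhaustivity discussed above.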
The first applications of this method regard the identification of biomarkers from proteomic spot volume datasets. In a first study [65] the authors investigated an artificial dataset and a real case study to demonstrate its principle: Ranking-PCA exhaustively identified the potential biomarkers and provided reliable and robust results. In another paper [64] the same method was applied to three different proteomic datasets to prove its effectiveness: 1) 8 2DE maps from adrenal nude mouse glands (4 controls and 4 affected by neuroblastoma) described by 532 spots; 2) 11 samples from nuclei of the human colon cancer HCT116 cell line (6 controls and 5 treated with an HDAC inhibitor) described by 779 spots; 3) 10 samples from total lysates of the human colon cancer HCT116 cell line (5 controls and 5 treated with an HDAC inhibitor) described by 525 spots. Ranking-PCA provided the perfect classification of all samples and a more exhaustive identification of biomarkers compared with previously published results based on the use of other classification tools. 
Another application [92] by the same authors regards the study of the proteomic changes involved in the tenderization of bovine Longissimus dorsi: the muscles of 4 Charolaise heifers and 4 Charolaise bulls were sampled at slaughter and after early and long ageing (2-4°C for 12 and 26 days, respectively). The protein composition of fresh muscle and of aged meat was analyzed by Cartesian and polar 2D electrophoresis. Ranking-PCA was applied to detect proteomic modulation: meat maturation caused changes in the abundance of proteins involved in metabolic, structural and stress-related processes. 
Soft Independent Modelling of Class Analogy (SIMCA) 
SIMCA [21-23,93] is based on the independent modelling of each class by means of PCA: each class is described by its relevant PCs. The samples belonging to each class are contained in the so-called SIMCA boxes, defined by the relevant PCs of each class (an example of different SIMCA boxes is presented in Figure 4). By exploiting PCA, the classification of each sample with SIMCA is not affected by experimental uncertainty and random variations, since each class is modelled only by its relevant PCs. This method is also useful when more variables than objects are available, since it performs a substantial dimensionality reduction. 
The classification rule for object i is based on Fisher's F-test: the residual standard deviation of object i (i.e. its distance from the model of class g) is compared to the residual standard deviation of class g (i.e. the typical distance of class g); if their ratio is smaller than the critical F value, based on the degrees of freedom and on the significance level, object i is classified in class g. With SIMCA, outliers can also be identified, i.e. samples classified in none of the classes: this happens when an object lies outside all the existing SIMCA boxes (an example is given in Figure 4). 
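The residual-based F-test can be sketched as follows; note that the degrees of freedom are deliberately simplified here (a complete SIMCA implementation corrects them for the number of fitted PCs and training objects):

```python
import numpy as np
from scipy import stats

def simca_fit(Xg, n_pc):
    """PCA model of one class: mean, loadings of the relevant PCs and the
    class residual standard deviation (simplified degrees of freedom)."""
    mean = Xg.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xg - mean, full_matrices=False)
    P = Vt[:n_pc].T
    resid = (Xg - mean) - (Xg - mean) @ P @ P.T
    s0 = np.sqrt((resid ** 2).mean())
    return mean, P, s0

def simca_accepts(x, model, alpha=0.05):
    """Accept object x into the class if its residual sd is not
    significantly larger than the class residual sd (Fisher's F-test)."""
    mean, P, s0 = model
    r = (x - mean) - (x - mean) @ P @ P.T
    F = (r ** 2).mean() / s0 ** 2
    return F < stats.f.ppf(1 - alpha, len(x), len(x))

rng = np.random.default_rng(7)
class_g = rng.normal(0.0, 1.0, size=(20, 10))   # training objects of class g
model = simca_fit(class_g, n_pc=2)
member = rng.normal(0.0, 1.0, size=10)          # behaves like class g
outlier = rng.normal(8.0, 1.0, size=10)         # far outside the SIMCA box
```

An object rejected by the models of all classes is flagged as an outlier, as described above.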
SIMCA provides an important statistic useful for the identification of the most discriminating variables (i.e. candidate biomarkers): the Discrimination Power (DP), which is a measure of the ability of each variable to discriminate between two classes (c and g) at a time. The greater the discrimination power, the more a variable influences the classification of an object into class c or g. A common formulation is:

DP_{c,g} = sqrt[(s^{2}_{c(g)} + s^{2}_{g(c)}) / (s^{2}_{c(c)} + s^{2}_{g(g)})]

where s_{i(j)} is the residual standard deviation of the objects of class i fitted to the model of class j. The discrimination power is positive by definition, but it is not bounded above. 
SIMCA was applied by Marengo et al. in two studies. The first one [21] is focused on the identification of biomarkers for neuroblastoma, the most common extracranial solid tumor of infancy and childhood, by 2D-PAGE. The second study is devoted to identifying biomarkers of mantle cell lymphoma [22] by evaluating the differential expression of two human lymphoma cell lines by 2D-PAGE. When SIMCA is applied as a classification method, biomarker identification can be performed through the analysis of the discrimination power of each candidate biomarker: usually, variables characterized by a DP larger than a selected threshold are considered significant. The same authors developed an approach [23] for identifying relevant proteins from SIMCA discrimination powers that is not based on a threshold level established by the operator. The method consists of two steps: 1) through a non-linear Box-Cox transformation, the population of the calculated DP values is turned into a well-known statistical distribution (e.g., Gaussian or gamma) and 2) the relevant spots are identified, by the use of probability plots, as those characterized by a transformed DP value that does not match the reference statistical distribution. The idea underlying the procedure is that the variables characterized by a relevant DP value do not belong to the population of the spots showing homogeneous DP values. The method successfully allowed the identification of the relevant spots from 2D maps in several case studies. 
Partial Least Squares Discriminant Analysis (PLS-DA) and Orthogonal Partial Least Squares (OPLS) 
PLS [79,80,94] is a multivariate regression method establishing a relationship between one or more dependent variables (Y) and a group of descriptors (X). X and Y variables are modeled simultaneously, finding the latent variables (LVs) in X that best predict the LVs in Y (Figure 5a). These LVs are calculated hierarchically, as in PCA. PLS was originally devised to model continuous responses, but it can also be applied for classification purposes by establishing an appropriate Y encoding the association of each sample to a class. The regression is then carried out between the X-block variables and the Y thus established. This application for classification purposes is called PLS-DA. The conceptual difference between PCs and LVs is represented in Figure 5b: while PCs are aligned along the direction of maximum variance, LVs are aligned along the direction that maximizes the covariance between X and Y variables. 
PLS-DA and related procedures based on PLS are quite widespread in proteomics, and several applications have been recently reported in clinical proteomics. West-Nørager et al. [95] demonstrated the feasibility of serodiagnosis of ovarian cancer by MALDI-MS: 265 sera from women admitted with symptoms of a pelvic mass were used for model building. The authors developed a rigorous approach for building classification models suitable for highly multivariate data. Spectra were first aligned and zones not containing peaks were removed; finally, a master list of 117 peaks defined by m/z intervals was obtained. For each interval, a PCA was calculated: if a peak represents only one underlying feature, one PC is expected to explain the variation; if the peak contains information from several chemical compounds, more than one PC may be needed, but the number of PCs will always be dramatically lower than the number of initial variables. Data redundancy was therefore eliminated by representing each peak by its relevant PCs. The entire procedure reduced the number of variables from more than 30000 to about 500. To test whether further variable selection would improve model prediction, PLS-DA models were calculated using a stepwise variable selection based on variable importance in projection (VIP) scores, which estimate the importance of each variable in the PLS-DA model: variables with VIP scores close to or greater than 1 were retained as significant. Time-dependent changes in peak profiles up to 15 months after sampling were demonstrated, even when storing samples at −20°C. The best models were able to classify with 79% Sp_{g} and 56% S_{g}, i.e., an analytical accuracy of 68%. 
Rajalahti et al. [96] applied PLS-DA to identify biomarker signatures in MS profiles of cerebrospinal fluid (CSF) from patients with multiple sclerosis. The low-molecular-weight CSF proteome from 54 patients with sclerosis and a range of other neurological diseases, as well as healthy controls, was analyzed in replicates by mass spectral profiling. PLS-DA identified the most discriminatory spectral regions by exploiting a nonparametric discriminating variable test (DIVA) together with the so-called selectivity ratio (SR) plot. 
Finally, Yang et al. [58] identified prognostic polypeptide blood plasma biomarkers of Alzheimer's disease (AD) progression. 119 blood plasma samples of patients with mild cognitive impairment (MCI) with different outcomes (stable and progressive MCI) were analyzed by untargeted, label-free shotgun proteomics. Predictive biomarkers of progressive MCI were selected by OPLS-DA [97], a modification of PLS which removes from the X variables the variation not correlated to Y: the final interpretation of the results is therefore easier due to the reduced complexity of the final model, and the prediction ability of the model usually improves. The best model showed an accuracy of 79% in predicting progressive MCI. Some sex-specific protein biomarkers were also identified; the significant sex bias in AD-specific biomarkers underscores the necessity of selecting sex-balanced cohorts in AD biomarker studies, or of using sex-specific models. 
Other applications regard: the proteomic profiling of follicular and papillary thyroid tumors [18]; the analysis of peptidomics data from clinical urine samples subjected to LC/MS to identify peptide biomarker fingerprints related to disease diagnosis and progression [98]; the identification of a serum protein profile predictive of resistance to neoadjuvant chemotherapy (NACT) in advanced breast cancers [56]; the identification of plasma protein biomarkers for depression and schizophrenia [99]; and the identification of biomarkers in colon cancer by 2-DE [17]. In this last application, PLS-DA was coupled to a backward-elimination variable selection procedure and compared to differential analysis carried out by classic PDQuest analysis: PLS-DA with backward elimination provided a larger set of candidate biomarkers, proving more exhaustive than classic differential analysis. 
Discriminant Analysis-Based Approaches 
LDA [79,80] is a Bayesian classification method that classifies the objects considering the multivariate structure of the data. In Bayesian methods, an object i, identified by its descriptor values x_{i}, is assigned to the class g for which the posterior probability P(g|x_{i}) is maximum. 
Each class is usually described by a Gaussian multivariate probability distribution: 

P(x_{i}|g) = (2π)^{−p/2} |S_{g}|^{−1/2} exp[−½ (x_{i} − x̄_{g})′ S_{g}^{−1} (x_{i} − x̄_{g})] 

where the argument of the exponential function contains the Mahalanobis distance between object i and the centroid of class g, which takes into account the shape of the class and the correlations among the variables (it contains the covariance matrix). Each object is classified in the class g for which the so-called discriminant score D_{g} is minimum: 

D_{g} = (x_{i} − x̄_{g})′ S_{g}^{−1} (x_{i} − x̄_{g}) + ln|S_{g}| − 2 ln P_{g} 

where S_{g} is the covariance matrix of class g, x̄_{g} is the centroid of class g, and P_{g} is the prior probability of class g. 
Bayesian methods differ in how the covariance matrix is chosen: in LDA it is approximated by the covariance matrix pooled between the classes; this corresponds to considering all the classes as having a common shape (i.e. a weighted average of the shapes of the classes present). Figure 6 shows an example of LDA where two classes A and B are present in the space described by two variables X_{1} and X_{2}: while the two classes overlap along each of the two original variables, they appear separated along the discriminant function; Figure 6 reports the linear discriminant direction as a dotted line. 
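The situation of Figure 6 is easy to reproduce. This is an illustrative LDA sketch in Python with scikit-learn, on simulated two-class data that overlap along each single variable but separate along a diagonal direction.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
# Two classes overlapping on each axis but separable along a diagonal direction
a = rng.normal([0, 0], 1.0, size=(50, 2))   # class A around (0, 0)
b = rng.normal([2, 2], 1.0, size=(50, 2))   # class B around (2, 2)
X = np.vstack([a, b])
y = np.array([0] * 50 + [1] * 50)

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.score(X, y))   # training non-error rate
print(lda.coef_)         # the linear discriminant direction
```

The `coef_` vector plays the role of the dotted discriminant direction in Figure 6: projecting the samples onto it separates the two classes that overlap on X_{1} and X_{2} individually.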
The variables included in the LDA model discriminating the classes can be chosen by a stepwise algorithm, iteratively selecting the most discriminating variables. Usually, a Forward Selection (FS) procedure is applied: the method starts from a model including no variables and adds one variable at a time until a stopping criterion is satisfied. The variable included in the model at each step is the one providing the greatest Fisher F-ratio, so that the j-th variable is included in the model, with p variables already included, if: 
F_{j} = (RSS_{p} − RSS_{p+j}) / s²_{p+j} 

where: 
s²_{p+j} = residual variance calculated for the model with p variables plus the j-th variable; 
RSS_{p} = residual sum of squares of the model with p variables; 
RSS_{p+j} = residual sum of squares of the model with p variables plus the j-th variable. 
The F value thus calculated is compared to a reference value (F_{to-enter}), usually set between 1 (more permissive selection, including a larger number of variables in the final model) and 4 (more severe selection). 
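The forward selection loop above can be sketched in a few lines. This is a minimal Python sketch on simulated regression-style data; the degrees of freedom used for the residual variance are one common choice, and the F-to-enter threshold is set to 4 (the "severe" end of the range).

```python
import numpy as np

def f_to_enter(X, y, included, j):
    """F-ratio for adding variable j to the model with the 'included' set."""
    def rss(cols):
        if not cols:
            return np.sum((y - y.mean()) ** 2)
        A = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        return np.sum((y - A @ beta) ** 2)
    rss_p, rss_pj = rss(included), rss(included + [j])
    dof = len(y) - len(included) - 2    # residual dof of the larger model
    return (rss_p - rss_pj) / (rss_pj / dof)

def forward_selection(X, y, f_enter=4.0):
    included, remaining = [], list(range(X.shape[1]))
    while remaining:
        f_vals = {j: f_to_enter(X, y, included, j) for j in remaining}
        best = max(f_vals, key=f_vals.get)
        if f_vals[best] < f_enter:      # stopping criterion
            break
        included.append(best)
        remaining.remove(best)
    return included

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 8))
y = 3 * X[:, 2] - 2 * X[:, 5] + rng.normal(scale=0.5, size=60)
print(forward_selection(X, y))
```

With the lower threshold F_{to-enter} = 1, the same loop would typically let several noise variables into the final model, illustrating the permissive/severe trade-off described above.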
LDA can be performed either on the original variables or on PCs. In the latter case, the LDA model based on the significant PCs can be converted into a model based on the original variables by means of the loadings. The combination of PCA and LDA allows the use of LDA even when the data contain fewer samples than variables, as is usual in proteomics. 
Another approach, useful when the number of variables exceeds that of the samples, is DLDA, where the covariance matrix is diagonal [100]. Since this simplification is by itself not sufficient when many variables are irrelevant for the classification and only add noise, a Feature Subset Selection (FSS), which retains only a small fraction of the initial set of variables, can be used. Nearly all DLDA-based techniques [100-102] use a filter approach for FSS: variables are first ranked using a statistical score and the discriminant function is built by selecting the highest-ranking variables. 
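A minimal DLDA sketch helps to see why the diagonal simplification copes with more variables than samples per class. This is an illustrative Python sketch on simulated data; the diagonal covariance is estimated here as the pooled per-variable within-class variance.

```python
import numpy as np

def dlda_fit(X, y):
    """Diagonal LDA: class means plus a pooled, diagonal covariance."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    # Pooled within-class variance, one value per variable (diagonal covariance)
    centered = np.vstack([X[y == c] - means[i] for i, c in enumerate(classes)])
    var = centered.var(axis=0, ddof=len(classes))
    return classes, means, var

def dlda_predict(X, classes, means, var):
    # Discriminant score: variance-scaled squared distance to each class mean
    d = ((X[:, None, :] - means[None, :, :]) ** 2 / var).sum(axis=2)
    return classes[np.argmin(d, axis=1)]

rng = np.random.default_rng(4)
# 50 variables but only 25 samples per class: a full covariance is singular here
X = np.vstack([rng.normal(0, 1, (25, 50)), rng.normal(1, 1, (25, 50))])
y = np.array([0] * 25 + [1] * 25)
classes, means, var = dlda_fit(X, y)
print((dlda_predict(X, classes, means, var) == y).mean())
```

Only per-variable variances are estimated, so the classifier stays well defined where a full covariance matrix would be singular; a filter FSS step would simply restrict the columns of X before `dlda_fit`.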
Many applications exist where LDA is applied alone or in combination with other methods such as CART or PCA. An interesting theoretical paper was presented by Zollanvari et al. [103], who proposed an analytical formulation for the joint sampling distribution of the actual and estimated errors of a classification rule, applied to LDA. Error estimation must in fact be used to evaluate the accuracy of a designed classifier, an issue that is critical in biomarker discovery for disease diagnosis and prognosis in genomics and proteomics. Exact results are provided in the univariate case, and a simple method is suggested to obtain an accurate approximation in the multivariate case. The analysis applies to finite training data and, in particular, to the small-sample datasets commonly found in genomics and proteomics applications. Numerical examples illustrate the analysis. 
In the field of clinical chemistry, Imre et al. [28] applied LDA to the classification of cancer patients and controls based on the determination of N-glycans of human serum alpha-1-acid glycoprotein (AGP). N-glycan oligosaccharides of AGP samples isolated from 43 individuals (controls and patients with lymphoma or ovarian tumor) were analyzed by MALDI-TOF MS, and 34 different glycan structures were identified. LDA showed a good separation between the three groups (NER = 88%). Cross-validation results indicated that the method has predictive power: cancerous vs. controls showed 96% S_{g} and 93% Sp_{g}; lymphoma vs. controls + ovarian tumor cases, 72% S_{g} and 84% Sp_{g}. 
Another study regards the application of LDA in plant proteomics. LDA was coupled to PCA by Negri et al. [16] for the identification of proteins involved in biotic and abiotic stress responses during the ripening of Pinot Noir skins. A comparative 2-DE analysis of grape skins collected at three moments of ripening was carried out; PCA was applied to the spot volume dataset and LDA, with a variable selection procedure based on a forward stepwise search, was applied to the resulting scores. This technique allowed the discrimination of veraison, quite mature and mature samples, and the sorting of the matched spots according to their significance. 
Other applications are by Zhong et al. [38] in animal proteomics and by Cheung et al. [104] in clinical proteomics (Table 3 for details). 
Classification and Regression Trees (CART) 
This method is used to build classification rules. Classification trees [93] are built by successive divisions (splits) of subgroups of the original dataset into two descending subgroups, with the aim of producing groups as homogeneous internally and as different from one another as possible. The result is a tree diagram where, starting from the root node (where the dataset is not yet separated), a series of nodes and branches depart; each node h represents a subgroup of the dataset. Nodes not undergoing a further split are called terminal nodes, and a class is associated to each terminal node. Starting from the root node h_{1}, the samples are separated in a series of splits: at each node, the split giving the most homogeneous division of the data into the two descendant nodes is selected. An example is given in Figure 7. 
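The splitting procedure is directly available in standard libraries. This is an illustrative CART sketch in Python with scikit-learn, on a simulated "spot volume" matrix in which one spot determines the class.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(5)
# Toy spot-volume matrix: 30 samples x 5 spots; spot 3 separates the classes
X = rng.normal(size=(30, 5))
y = (X[:, 3] > 0).astype(int)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
# Print the node/branch structure of Figure 7 in text form
print(export_text(tree, feature_names=[f"spot_{i}" for i in range(5)]))
```

The printed tree reproduces in text form the diagram of Figure 7: each line is a node with its splitting condition, and the leaves carry the assigned class.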
Several applications of classification and regression trees are present in the literature. One application is in the field of ecotoxicology [53] (Table 3 for details), while other applications are in the field of clinical proteomics [35,36,39,54]. 
In the study by Whelan et al. [54], the authors applied LC-MS/MS to identify biosignatures of breast cancer in proximal fluid samples. They investigated three clinically important types of breast cancer using a panel of human cell lines: HER2 positive, hormone receptor positive and HER2 negative, and triple negative (HER2-, ER-, PR-). The most abundant secreted, sloughed, or leaked proteins released into serum-free media from these breast cancer cell lines were characterized by a combination of protein fractionation methods before LC-MS/MS analysis. 249 proteins were detected in the proximal fluid of 7 breast cancer cell lines. Each cell line displayed a unique and consistent biosignature regardless of the individual group classifications, demonstrating the potential for stratification of breast cancer. Predictive CART was able to categorize each cell line as HER2 positive, HER2 negative and hormone receptor positive, or triple negative based on only two proteins. 
Other applications (Table 3 for more details) regard the search for biomarkers of: the response to Infliximab in Crohn's disease [35]; hepatocellular carcinoma [39]; and the predictive diagnosis of chronic allograft dysfunction by urinary proteomics [36]. All these applications rely on SELDI-TOF profiling. 
Random Forests 
Random Forests (RF) [105] is an extension of classification trees, structured to grow many trees. A new object is classified by each independent tree in the forest, each tree giving a classification (a vote); the forest then chooses the most recurrent classification over all the trees. Each tree is grown as follows: 
a. If N objects are in the training set, N cases are sampled randomly from the data to grow the tree; 
b. If there are M variables, a number m<<M is specified such that at each node, m variables are selected at random out of the M and the best split on these m is used to split the node. The value of m is held constant during the forest growing. 
c. Each tree is grown to the largest possible extent. 
The error rate of the forest depends on: the correlation between any two trees in the forest (increasing the correlation increases the forest error rate); and the strength of each individual tree (increasing the strength of the individual trees decreases the forest error rate). 
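Steps a-c map directly onto the constructor arguments of a typical random forest implementation. This is an illustrative sketch in Python with scikit-learn on simulated data.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Simulated dataset: 100 samples, 60 variables, 5 of them informative
X, y = make_classification(n_samples=100, n_features=60, n_informative=5,
                           random_state=0)

# bootstrap=True resamples the N cases for each tree (step a); max_features
# limits the m << M variables tried at each split (step b); the default
# max_depth=None grows each tree to the largest possible extent (step c)
rf = RandomForestClassifier(n_estimators=200, max_features="sqrt",
                            bootstrap=True, oob_score=True, random_state=0)
rf.fit(X, y)
print(rf.oob_score_)                 # out-of-bag estimate of accuracy
ranked = rf.feature_importances_.argsort()[::-1][:5]
print(ranked)                        # candidate biomarkers by importance
```

The cases left out of each bootstrap sample give the out-of-bag error estimate for free, and the feature importances provide the biomarker ranking exploited in the applications below.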
RF is quite widespread for the identification of biomarkers in proteomics, with most applications in the field of clinical proteomics. 
Ostroff et al. [106] investigated sera from 117 Malignant Pleural Mesothelioma (MM) cases and 142 asbestos-exposed control individuals for the early detection of MM. Biomarker discovery, verification, and validation were performed using SOMAmer proteomic technology, which simultaneously measures over 1000 proteins in unfractionated biologic samples. Using univariate and multivariate approaches, 64 candidate protein biomarkers were identified; a 13-marker random forest classifier was derived with an AUC of 0.99 ± 0.01 in training, 0.98 ± 0.04 in independent blinded verification and 0.95 ± 0.04 in blinded validation studies. S_{g} and Sp_{g} were 97% and 92% in training and 90% and 95% in blinded verification, respectively. The classifier accuracy was maintained in a second blinded validation set, with S_{g}/Sp_{g} of 90%/89% and a combined accuracy of 92%. 
Other applications (Table 3 for more details) regard: plasma proteomic profiles from disease-discordant monozygotic twins in multiple systemic autoimmune diseases by RPLC-MS [57]; the investigation of how fasting for 36 h, as compared to 12 h, affects the human proteome of platelets, peripheral blood mononuclear cells, plasma, urine and saliva [6]; the improvement of a multianalyte serum biomarker panel to identify lymph-node metastases in non-small cell lung cancer (NSCLC) with circulating autoantibody biomarkers [107]; the development of a multiplexed tumor-associated autoantibody-based blood test for detecting NSCLC [108]; and the identification of biomarker panels in 2D-DIGE data from sera of patients with prostate cancer [19]. 
Support Vector Machines (SVM) Approaches 
SVM [109] are machine learning algorithms that find the maximal margin hyperplane, i.e. the hyperplane maximizing the distance between the two classes present. A linear hyperplane is in general able to separate n+1 samples in n dimensions; therefore, in high-dimensional data, the use of nonlinear kernels can often be avoided. The linear SVM separates the classes by a linear boundary. 
The graphical representation of a simple case is given in Figure 8: the solid line represents the boundary hyperplane, while the dotted lines delimit the margin. Cortes and Vapnik [110] in 1995 proposed a modified maximum margin allowing misclassifications (the soft margin). Boser et al. [111] also proposed a modification of the original linear classifier providing a nonlinear classifier, by applying the kernel trick to maximum-margin hyperplanes. 
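The elements of Figure 8 can be reproduced with a few lines of scikit-learn. This is a minimal linear SVM sketch in Python on simulated data; C is the soft-margin parameter of Cortes and Vapnik.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(6)
# Two simulated classes in two dimensions
X = np.vstack([rng.normal(-1, 0.5, (20, 2)), rng.normal(1, 0.5, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)

# kernel="linear" keeps the boundary a hyperplane; large C tolerates few
# misclassifications, small C allows a softer margin
svm = SVC(kernel="linear", C=1.0).fit(X, y)
print(svm.coef_, svm.intercept_)     # w and b of the separating hyperplane
print(svm.support_vectors_.shape)    # points lying on or inside the margin
```

The support vectors are the samples sitting on the dotted margin lines of Figure 8 (or violating them, under the soft margin); only they determine the boundary.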
Support vector machines have been extensively used in proteomics for the identification of biomarkers, usually coupled to other multivariate tools such as genetic algorithms, PCA, random forests, and Canonical Correlation Analysis (CCA). This last method, a multivariate generalization of correlation analysis developed by Hotelling, is used to define the relationship between two sets of variables [80]. When CCA is applied to multichannel signal processing, linear combinations X and Y of two mean-centered multivariate random vectors [x_{1}(t),...,x_{m}(t)]^{T} and [y_{1}(t),...,y_{n}(t)]^{T} (t = 1,...,N) are defined. 
CCA computes the linear combination coefficients (i.e. regression weights) ω_{x} and ω_{y}, so that the correlation between the new variables X and Y (called canonical variates) is maximum. 
The first pair of canonical variates corresponds to the eigenvectors ω_{x} and ω_{y} associated with the largest eigenvalue. The remaining canonical variates correspond to the remaining eigenvectors, the associated eigenvalues being the squared canonical correlation coefficients. Each pair of canonical variates is maximally correlated and, at the same time, uncorrelated with the previous pairs. 
Sparse Canonical Correlation Analysis (SCCA) is a modification of CCA [112] that yields sparse weight vectors, useful when CCA has to be applied to high-dimensional, sparse data. 
Karthikeyan et al. [113] applied SVM and a genetic algorithm (GA) to plasma peptide chromatograms for identifying biomarkers of air contaminant exposures. Interrogation of chromatographic data for biomarker discovery is hampered by the stochastic variability in retention times; the difficulty is further increased when the effects of exposure (e.g. to environmental contaminants) and biological variability result in varying numbers and intensities of peaks among chromatograms. The authors developed software to correct the time shifts in chromatographic data through iterative selection of landmark peaks and isometric interpolation to improve alignment. To illustrate the tool, plasma peptides from Fischer rats exposed for 4 h to clean air or Ottawa urban particles (EHC93) were separated by HPLC with autofluorescence detection, and the retention time shifts between chromatograms were dewarped. Both dewarped and non-dewarped datasets were then mined for models containing the peptide peaks that best discriminate among the treatment groups. In general, models generated from dewarped datasets better classified test sample chromatograms into the clean air or EHC93 exposure groups, and into the 0 or 24 h post-recovery time groups. Peak areas of peptides in the model producing the best discrimination of treatment groups were analyzed by two-way ANOVA with exposure (clean air, EHC93) and recovery time (0 h, 24 h) as factors. Statistically significant (p<0.05) time-dependent and exposure-dependent increases and decreases were noted, establishing these peptides as biomarker candidates for further validation. 
Ahn et al. [114] identified serum biomarker panels for the diagnosis of gastric adenocarcinoma by random forests and SVM. A 29-plex array platform with 29 biomarkers, consisting of 11 proteins discovered through proteomics and 18 previously known to be cancer-associated, was constructed. Via RF, 13 markers were selected for multivariate classification analysis. Then, multivariate classification analysis with RF and SVM was performed on the training set, consisting of 70 serum samples from gastric adenocarcinoma patients and 70 control samples. Each algorithm was cross-validated on the test set, to which 50 samples were assigned. The best RF and SVM algorithms showed mean accuracies of 88.3% and 89.7%, respectively. These two algorithms were then tested on a separate, independent blinded set of 95 gastric adenocarcinoma sera and 51 controls, with mean accuracies of 89.2% and 85.6%, respectively. The biomarker panel selected by RF containing 11 markers showed a higher accuracy in the validation set. RF generally outperformed SVM regardless of stage or tumour size, even if SVM showed a higher sensitivity for small tumours. 
Other applications (Table 3 for details) regard: the use of PCA and SVM to discriminate among multiple sclerosis-related disorders [30]; the comparative proteomic analysis of non-small-cell lung cancer and normal controls using serum label-free quantitative shotgun technology [59]; plasma biomarkers of tuberculosis and malaria by SCCA and SVM [112]; biomarkers associated with early renal injury by SELDI-TOF MS in human urine [37]; the ability to perform a clinical proteomic study using samples collected at different times from two independent clinical sites by label-free 2D-LC-MS [51]; and the development of a sequence-specific exopeptidase activity test for functional biomarker discovery [29]. 
Other Methods 
In this section, some papers will be presented, reporting the use of alternative methods developed by different authors to address particular drawbacks of the standard multivariate tools. 
Karimi et al. [115] proposed a novel approach for the identification of discriminatory variables in mass spectrometry based on clustering of variables. In factor analysis-based discriminant models, latent variables are calculated from the data at all instrument channels employed. Since some channels are irrelevant for classification, the extracted LVs carry mixed information from both useful and irrelevant channels. Clustering of variables (CLoVA), based on unsupervised pattern recognition, was suggested by the authors as an efficient method to identify the most informative spectral regions. The m/z values were clustered via self-organizing maps. Then, the spectral data of each cluster were separately used as the input variables of classification methods such as PLS-DA and extended canonical variates analysis (ECVA). The proposed method was evaluated on two experimental data sets (ovarian and prostate cancer) and was able to distinguish cancerous from control samples with higher S_{g} and Sp_{g} than conventional PLS-DA and ECVA. 
Chen [27] applied multivariate curve resolution (MCR) to improve proteomic mass spectra classification. The paper describes a novel proteomic pattern analysis algorithm for biomarker discovery using MALDI-TOF or SELDI-TOF. The algorithm (MCRmarker) is based on the combination of MCR with classification methods and applies singular value decomposition to select differentially expressed m/z windows. In each selected m/z window, potential biomarkers are identified from MCR-resolved peak profiles, which show better performance than the precise m/z values. The identified potential biomarkers do not depend on the selection of MCR methods and consist of clearly detectable peaks, which may represent identifiable proteins, protein fragments or peptides. The algorithm was validated on two data sets from the literature. 
Carlson et al. [116] reported biomarker clustering to address correlations in proteomic data. Existing methods for dimension reduction, i.e. PCA and related techniques, are not always satisfactory in proteomics since their results are not easy to interpret. The authors propose a preprocessing algorithm that clusters highly correlated features, using the Bayes information criterion to select an optimal number of clusters. Statistical analysis of clusters, instead of individual features, benefits from lower noise and reduces the difficulties associated with strongly correlated data. This preprocessing tool proved to improve biomarker discovery in clinical SELDI-TOF-MS datasets of plasma from patients with Kawasaki disease and of bone-marrow cell extracts from patients with acute myeloid or acute lymphoblastic leukemia. 
Dutkowski and Gambin [117] proposed a new approach to the biomarker selection problem, based on the application of several competing feature ranking procedures and the computation of a consensus list of features from their outcomes. The method was validated on two proteomic datasets for the diagnosis of ovarian and prostate cancer. The proposed methodology can improve the classification results and, at the same time, provide a unified biomarker list for further biological examination and interpretation. 
Han [118] applied nonnegative PCA to mass spectral serum profiles for biomarker discovery. Nonnegative PCA is an extension of PCA [118] where nonnegativity constraints are imposed on the loadings. This alternative is useful when PCA, providing both positive and negative loadings, makes the interpretation of the loadings difficult, i.e. when positive-valued signals such as spectra or mass signals are investigated. The author addresses the main drawback of PCA, i.e. its global feature selection mechanism, which prevents it from capturing local features. In this study, the author developed a nonnegative PCA algorithm and presented a nonnegative-PCA-based SVM algorithm with sparse coding to perform high-performance proteomic pattern classification. 
Purohit et al. [41] developed a procedure for the identification of serum candidate biomarkers of Type 1 diabetes (T1D) by SELDI-TOF and model averaging. 146 protein/peptide peaks were identified as significantly changing out of a total of 581 peaks discovered. For the first replicate, the data were split into training and test sets. Normal kernel discriminant analysis was then used on random sets of three peaks (models), and each of 200000 three-peak models was evaluated by leave-one-out (LOO) cross-validation. Models with LOO cross-validation error rates >25% were discarded, and the predictions of the remaining models were averaged using plurality voting. The resulting set of models was then evaluated on the test set and applied to the dataset of the second replicate. T1D and control samples were classified with 88.9% Sp_{g} and 90.0% S_{g}, while 82.8% Sp_{g} and 76.2% S_{g} were reached on the test set. 
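The model-averaging scheme can be sketched compactly. This is an illustrative Python sketch with scikit-learn on simulated data; plain LDA is used here as a stand-in for the normal kernel discriminant analysis of the paper, and the search over three-peak models is exhaustive rather than random.

```python
import numpy as np
from itertools import combinations
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(8)
n, p = 40, 8
y = np.array([0] * 20 + [1] * 20)
X = rng.normal(size=(n, p))
X[:, 0] += 2.0 * y                      # two informative "peaks"
X[:, 4] += 2.0 * y

kept = []
for trio in combinations(range(p), 3):  # every 3-peak model
    err = 1 - cross_val_score(LinearDiscriminantAnalysis(), X[:, trio], y,
                              cv=LeaveOneOut()).mean()
    if err <= 0.25:                     # discard models with LOO error > 25%
        kept.append(trio)

# Plurality voting: each surviving model votes on each sample
votes = np.array([LinearDiscriminantAnalysis().fit(X[:, t], y).predict(X[:, t])
                  for t in kept])
consensus = (votes.mean(axis=0) > 0.5).astype(int)
print(len(kept), (consensus == y).mean())
```

The filter-then-vote structure is the key point: no single three-peak model is trusted on its own, and only the consensus of the models surviving the 25% LOO error cut is reported.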
Other applications (Table 3 for details) regard: a protein biomarker profile in tear fluid of breast cancer patients by artificial neural networks on SELDI-TOF MS [42]; a comparative urine analysis by liquid chromatography-mass spectrometry and PCA coupled to the Nearest Shrunken Centroid (NSC) algorithm [46]; the correlation of GC-TOF-MS-based metabolite profiling and LC-MS-based protein profiling to improve pattern recognition for multiple biomarker selection, by PCA and Independent Component Analysis (ICA) [47]; the extraction of reliable protein signal profiles from MALDI-TOF spectra by ICA [24]; the identification of a liver cirrhosis signature in plasma for predicting hepatocellular carcinoma risk by PCA, DLDA and 3-Nearest Neighbors (3-NN) [119]; a high-performance profile-biomarker diagnosis for mass spectral profiles by embedding multiresolution ICA in LDA and SVM [120]; the application of multivariate adaptive regression splines to predict biomarkers of spontaneous preterm birth [121]; and the urine proteomic profiling of pediatric nephrotic syndrome by a genetic algorithm search in the principal component space [40]. 
As regards the development of software tools for multivariate data treatment, Fan et al. [122] developed digeR, a graphical user interface R package for analyzing 2D-DIGE data with different multivariate tools. Akella et al. [48] instead developed CLUE-TIPS (Clustering Using Euclidean distance in Tanimoto Inter-Point Space), a clustering method for pattern analysis of LC-MS data. In CLUE-TIPS, an inter-sample distance feature map is generated from filtered, aligned and binarized raw LC-MS data by applying the Tanimoto distance metric to obtain normalized similarity scores between all sample pairs for each m/z value. Clustering and visualization methods for the inter-sample distance map were developed to analyze datasets for differences at the sample level as well as at the individual m/z level. CLUE-TIPS can also be used as a tool for assessing the quality of LC-MS runs. It was applied to LC-MS data obtained from plasma samples collected at various time points and treatment conditions from immunosuppressed mice implanted with MCF-7 human breast cancer cells. CLUE-TIPS successfully detected the differences/similarities among samples at various time points taken during tumor progression, and also recognized differences/similarities among samples representing various treatment conditions. 
Comparison of Different Methods 
Some interesting papers focused on the comparison of different multivariate methods have recently appeared: these studies are reported here together with their main findings, to provide information on the performance of different statistical tools applied to the same case study. 
Brasier et al. [123] examined physiological data from 1048 subjects to identify 4 quantitative intermediate phenotypes of asthma. Four different statistical machine learning methods were evaluated to predict each intermediate phenotype using cytokine measurements on a 76-subject subset. The comparison of these models using the area under the ROC curve and the overall classification accuracy indicated that logistic regression and multivariate adaptive regression splines produced the most accurate predictions of intermediate asthma phenotypes. 
Levner [124] reported the application of feature selection and NSC classification to protein mass spectrometry; the aim of the study was to reduce data dimensionality in mass spectrometry to allow the use of standard machine learning techniques. The performance of the NSC classifier was evaluated coupled with different feature selection algorithms: Student's t-test, Kolmogorov-Smirnov test, P-test, sequential forward selection and a modified version of sequential backward selection. In addition, several dimensionality reduction approaches were tested: PCA and PCA coupled with LDA. Comprehensive experiments, conducted on five popular cancer datasets, revealed that the sequential forward selection and boosted feature selection algorithms produced the most consistent results across all data sets. 
Guo and Balasubramanian [125] performed a comparative evaluation of classifiers in the presence of statistical interactions between features in high-dimensional data settings. A central challenge in biomedical investigations is the estimation of an optimal prediction algorithm to distinguish between different disease phenotypes; these analyses are hampered by features that exhibit statistical interactions. The authors compared the performance of 4 classifiers (KNN, Prediction Analysis for Microarrays - PAM, RF and SVM) on high-dimensional datasets including statistically interacting feature subsets. Performance was evaluated under varying sample size, levels of S/N ratio and strength of statistical interactions among features. Simulation studies revealed that PAM had the highest classification accuracy in the absence of noise and statistical interactions, when feature distributions were multivariate Gaussian within each class. In the presence of statistical interactions, modest effect sizes and the absence of noise, SVM achieved the best performance, followed closely by RF. RF was optimal in settings including both significant levels of high-dimensional noise features and statistical interactions between biomarker pairs. 
Christin et al. [126] compared different feature selection methods for biomarker discovery in clinical proteomics. Six feature selection methods for LC-MS-based proteomics and metabolomics biomarker discovery were compared: the t-test, the Mann-Whitney-Wilcoxon (MWW) test, NSC, linear SVM-recursive feature elimination (SVM-RFE), PCA-DA and PLS-DA. The methods were tested using human urine and porcine cerebrospinal fluid samples that were spiked with a range of peptides at different concentration levels. The ideal feature selection method should select the complete list of discriminating features related to the spiked peptides without selecting unrelated features. The performance was assessed using the harmonic mean of recall and precision (F-score) and the geometric mean of recall and the true negative rate (G-score). The univariate t- and MWW tests with multiple testing corrections are not applicable to datasets with small sample sizes (n=6), but their performance improves markedly with increasing sample size up to a point (n>12) at which they outperform the other methods. PCA-DA and PLS-DA select small feature sets with high precision but miss many true positive features related to the spiked peptides. NSC strikes a reasonable compromise between recall and precision for all datasets, independent of spiking level and number of samples. Linear SVM-RFE performs poorly at selecting features related to the spiked compounds, even though the classification error is relatively low. 
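The two metrics are simple to state explicitly. In this setting the "positives" are the spiked (true) features; the counts in the example below are hypothetical, chosen only to show the arithmetic.

```python
# F-score: harmonic mean of recall and precision.
# G-score: geometric mean of recall and the true negative rate.
import math

def f_score(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def g_score(tp, fn, tn, fp):
    recall = tp / (tp + fn)
    tnr = tn / (tn + fp)          # true negative rate
    return math.sqrt(recall * tnr)

# Hypothetical selection result: 40 spiked features among 1000; the method
# recovers 30 of them and wrongly selects 10 unrelated features.
tp, fn, fp, tn = 30, 10, 10, 950
print(round(f_score(tp, fp, fn), 3), round(g_score(tp, fn, tn, fp), 3))
```

Note that with many unrelated features the G-score stays high even for modest recall, which is why the two scores are reported together.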
Ongay et al. [127] performed a statistical evaluation of CZE-UV and CZE-ESI-MS data of intact alpha-1-acid glycoprotein isoforms for their use as potential biomarkers in bladder cancer. Samples from 16 individuals (8 healthy, 8 with bladder cancer) were analyzed. The analytical data were evaluated employing different statistical techniques: ANOVA, PCA, LDA and PLS-DA. Statistically significant differences between the two groups were observed. The best results were obtained by LDA, which showed a non-error rate (NER) of 93.75%. 
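With only 16 samples, leave-one-out cross-validation is the natural way to estimate such a non-error rate. The sketch below uses simulated data (not the CZE measurements) to show how an LDA model on a small two-group study is evaluated; the NER is simply the cross-validated fraction of correctly classified samples.

```python
# LDA evaluated by leave-one-out cross-validation on a small two-group study
# (simulated data standing in for the real measurements).
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.datasets import make_classification

# 16 samples (two groups of 8), 4 variables
X, y = make_classification(n_samples=16, n_features=4, n_informative=2,
                           n_redundant=0, class_sep=2.0, random_state=1)

ner = cross_val_score(LinearDiscriminantAnalysis(), X, y,
                      cv=LeaveOneOut()).mean()   # non-error rate
print(f"NER = {ner:.2%}")
```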
Conclusion 
This review aimed to present the most recent applications of multivariate statistical tools in proteomics for the identification of biomarkers. The most recent applications in the literature were presented separately for each multivariate method adopted, together with the theoretical basis of each statistical method. A quite wide range of different statistical methods is exploited in the literature for the identification of biomarkers in proteomics, providing sound results. In general, multivariate methods should always be preferred to univariate approaches, since they provide a pool of markers highlighting synergistic and antagonistic effects. The biological effect played by a particular factor (a pathology, a drug, a polluting effect, a ripening effect, etc.) is usually the result of a series of different mechanisms, either independent from each other or showing relevant interactions. Multivariate procedures are able to highlight these relationships, avoiding the neglect of relevant information. 
It is however important to avoid the identification of false positives, mostly due to chance correlations: this risk greatly increases when little information is available (i.e., a small number of cases is investigated), as is often the case in proteomic studies. This problem can be partially solved by a sound experimental design and sample collection; each study should be carefully designed from a statistical point of view before being performed, in order to include all possible sources of biological variation. When the poor availability of samples cannot be remedied, it is very important to apply mathematical tools to validate the models built, thus evaluating their predictive ability: in these cases cross-validation or the use of simulation algorithms is mandatory to identify only statistically significant markers. Another way to face this problem is the use of multivariate methods based on projection to latent structures (e.g., based on PCA and PLS approaches), which provide a substantial dimensionality reduction by considering a few latent variables or principal components rather than a large number of original variables. This approach also makes it possible to apply other classification tools, such as LDA, to problems with fewer samples than variables: in such cases LDA cannot be applied unless a variable selection procedure is used that reduces the number of discriminant variables to at most the number of samples available. 
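The PCA-DA strategy described above can be sketched concisely: projecting a "short and wide" data matrix onto a few principal components makes LDA applicable, and wrapping the whole pipeline in cross-validation guards against chance correlations. The matrix dimensions below are illustrative.

```python
# PCA-DA: PCA for dimensionality reduction followed by LDA, with the whole
# pipeline cross-validated so the PCA is refit on each training fold.
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

# 20 samples x 500 variables: far fewer samples than variables,
# so LDA cannot be applied to the raw matrix
X, y = make_classification(n_samples=20, n_features=500, n_informative=5,
                           n_redundant=0, random_state=0)

pca_da = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
acc = cross_val_score(pca_da, X, y, cv=5).mean()   # cross-validated accuracy
print(round(acc, 2))
```

Fitting the PCA inside each cross-validation fold, rather than once on the full matrix, is essential: otherwise the held-out samples leak into the projection and the predictive ability is overestimated.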
The identification of biomarkers represents a balance between the achievement of parsimonious models with the best predictive ability and the necessity of obtaining the maximum amount of information about the effect investigated: it is the authors' opinion that the future of biomarker identification lies in the exhaustive search for potential markers. Complex effects such as pathologies, pollution effects, drug effects, etc., acting on a wide range of individuals characterized by a large biological variability, cannot realistically be reflected in a restricted panel of biomarkers. We think that the future will rely on high-throughput techniques able to provide great amounts of information that can be coupled together to identify exhaustive panels of markers, improving the predictive performance of the final models in terms of specificity and sensitivity: in this respect, the use of sound and reliable multivariate tools is particularly important to obtain reliable results. 
References 

Tables 1–3 and Figures 1–8 (available with the original article)