The PR (Pattern Recognition) environment

i.pr.training

Graphical interface for extracting training data from GRASS raster maps. Given a list of maps (parameter map), of which the first is used for display unless the parameter vismap is set, training examples of size rows x cols (parameters rows and cols respectively) are selected by clicking on the map. The selected examples are assigned the class given in input (parameter class), which must be a progressive positive integer. An alternative (non-graphical) procedure is to give the program a GRASS sites file in input (parameter sitefile); the program will then create the training examples at those sites, assigning them the classes described in the sites file. The output of the program is a series of raster maps of size rows x cols, with names of the form x.y.z, where y is the name of the raster map from which the example was extracted, z a progressive number, and x the name of a file that will be created (parameter training), generated for example by the following calls:

EXAMPLE: i.pr.training map=ammcom,genesis rows=11 cols=11 training=AAA class=1

EXAMPLE: i.pr.training map=ammcom,genesis rows=11 cols=11 training=AAA class=2

FILE: AAA

* Help

i.pr.features

Processes training data and extracts features. Given a training file in input (parameter training), which contains the names of raster maps belonging to different layers, the program can normalize the maps of the selected layers (parameter normalize) and compute their mean (parameter mean) and variance (parameter variance). In addition, for each selected layer it can compute the principal components (parameter princomp); the principal components model can be computed on all the data (default) or only on the data of selected classes (parameter classpc).
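As an illustration of the per-layer operations just described, the following sketch computes the mean and variance features for a set of training examples. This is illustrative only, with hypothetical toy data; it is not the module's implementation.

```python
# Sketch: per-example mean and variance features, as i.pr.features
# computes them for a selected layer (illustrative only).

def mean_feature(example):
    """Average of all cell values in one training example."""
    return sum(example) / len(example)

def variance_feature(example):
    """Variance of the cell values around their mean."""
    m = mean_feature(example)
    return sum((v - m) ** 2 for v in example) / len(example)

# Two toy 2x2 training examples, flattened row by row.
examples = [[1.0, 2.0, 3.0, 4.0], [2.0, 2.0, 2.0, 2.0]]
features = [[mean_feature(e), variance_feature(e)] for e in examples]
# features[0] -> [2.5, 1.25]; features[1] -> [2.0, 0.0]
```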
The extracted features can be standardized (parameter standardize); this parameter refers not to the number of layers in the training file but to the number of features computed up to that point. The output of the program is a file (parameter features) such as the one generated by the following call:

EXAMPLE: i.pr.features training=AAA features=BBB normalize=1 mean=1,2

FILE: BBB

If no operation is selected, the extracted features will be the data themselves. The program i.pr.features can perform the same operations on non-GRASS data, i.e. on a table of data. The data format must be like the following:

FILE: AAA

In this case the data are interpreted as a single layer (so, for example, if you select mean=1, the extracted feature will be the average of the data). The output will be as follows:

EXAMPLE: i.pr.features training=AAA features=BBB mean=1

FILE: BBB

* Help

i.pr.statistics

Computes statistics on features. Given a features file in input (parameter features), it computes the Kolmogorov-Smirnov test and the t-test for each feature of each class. If principal components are present among the features, it computes the variance explained by them. If more than one principal components model is present (for more than one layer), the analysis is performed on a single layer (parameter layer). The parameter npc restricts the analysis to the first npc principal components.

* Help

i.pr.model

Creates a classification model from a features file. The type of model is chosen by setting one of the flags: g (gaussian mixture), n (nearest neighbor), t (classification trees), s (support vector machines).
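The per-class comparison that i.pr.statistics reports is, at its core, the two-sample Kolmogorov-Smirnov distance between the feature values of two groups. A minimal sketch of that statistic (illustrative only, not the module's code):

```python
# Sketch: two-sample Kolmogorov-Smirnov statistic, the quantity that
# i.pr.statistics tests per class and per feature (illustrative only).

def ks_statistic(a, b):
    """Maximum distance between the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    d = 0.0
    for x in a + b:
        fa = sum(1 for v in a if v <= x) / len(a)
        fb = sum(1 for v in b if v <= x) / len(b)
        d = max(d, abs(fa - fb))
    return d

# Identical samples give 0; completely separated samples give 1.
print(ks_statistic([1, 2, 3], [1, 2, 3]))     # 0.0
print(ks_statistic([1, 2, 3], [10, 11, 12]))  # 1.0
```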
Then there are the general parameters: the features file (parameter features), the name of the file that will contain the model (parameter model), the name of an additional features file on which to compute prediction values during training (parameter validation), the name of an additional features file on which to compute prediction values besides the training set (parameter test), and the number of principal components to be used in the model (parameter npc), if principal components are present among the features. Then there are model-specific parameters. The gaussian mixture has none. For nearest neighbor there is the number of nearest neighbors (parameter nnk), although this parameter is used only for evaluating the model itself. For classification trees one can decide whether to build single-node trees (stumps) or not (parameter treestamps), and decide how many data a node must contain at least in order to be split (parameter treeminsize). Furthermore, the parameter treecosts allows unbalancing the classes (rpart style). For support vector machines the user must set the type of kernel (parameter svmkernel, which can take the values linear, gaussian and 2pbk), the scale of the kernel if it is gaussian (parameter svmkp), the regularization parameter (svmC), and the convergence parameters svmeps, svmtolerance and svmmaxloops. It is suggested not to change the first two, while the maximum number of optimization loops can be modified (with care). For support vector machines it is also possible to compute a leave-one-out estimate of the model error using the parameter svml1o. The output of the procedure is a file containing the selected model. Standard output describes the performance of the model on the training set and, optionally, on a test set; for support vector machines, optionally, also the cross-validation estimate. For classification trees and support vector machines, combinations of models (bagging and boosting) are also available.
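The boosting combinations mentioned above rely on iteratively re-weighting the training samples. The following is a minimal sketch of the standard AdaBoost re-weighting step, stated for the general algorithm rather than this module's actual code:

```python
import math

# Sketch: one AdaBoost re-weighting round (general algorithm,
# illustrative only; assumes weighted error strictly between 0 and 1).

def adaboost_round(weights, correct):
    """Compute the model weight alpha from the weighted error, then
    up-weight misclassified samples and renormalize."""
    err = sum(w for w, c in zip(weights, correct) if not c)
    alpha = 0.5 * math.log((1 - err) / err)
    new = [w * math.exp(-alpha if c else alpha)
           for w, c in zip(weights, correct)]
    total = sum(new)
    return alpha, [w / total for w in new]

# Four samples, uniform weights; only the last one is misclassified.
alpha, w = adaboost_round([0.25] * 4, [True, True, True, False])
# After the update the misclassified sample carries half the weight.
```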
The number of models is chosen with the parameters bagging and boosting respectively. For boosting a cost-sensitive version is also available, whose parameter is costboosting. If boosting is used, dynamic weighting of the boosting can be obtained using the parameter weightsboosting. Again for boosting of trees, a (still experimental) soft-margin version of the model can be built with the parameter softmarginboosting. A cost-sensitive version of a single support vector machine is obtained using the parameter svmcost. It is also possible to use a regularized version of the AdaBoost algorithm by specifying the number of intervals (parameter reg) into which to divide the misclassification ratio of the generated training sets; in this case a validation set is indispensable. Alternatively, it is possible to select manually the value of the misclassification ratio above which samples are removed from the training set (parameter misclassratio). Some models are restricted to binary classification, according to the following scheme:

Gaussian mixture: multiclass
Nearest neighbor: multiclass
Classification trees: multiclass
Support vector machines: binary
Bagging classification trees: multiclass
Boosting classification trees: binary
Bagging support vector machines: binary
Boosting support vector machines: binary

* Help

i.pr.classify

Classifies GRASS raster maps on the basis of a classification model. Given a set of GRASS raster maps in input (parameter inputmap), whose order must be the same used for extracting the training data (!!!!), and a classification model (parameter model), it produces a GRASS raster map containing the result of the classification.
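Conceptually, the classification map is produced by applying the model to the window of cells around every position of the input layers, in the same order used when the training data were extracted. A minimal sketch of that idea, with a hypothetical toy classifier and a single layer (illustrative only):

```python
# Sketch: cell-wise classification of a raster by sliding a
# rows x cols window over it (illustrative only; toy model and data).

def classify_map(layer, rows, cols, model):
    """Apply model to the rows x cols window centred on each cell;
    border cells whose window falls outside the grid are left as 0."""
    h, w = len(layer), len(layer[0])
    rr, cc = rows // 2, cols // 2
    out = [[0] * w for _ in range(h)]
    for i in range(rr, h - rr):
        for j in range(cc, w - cc):
            window = [layer[i + di][j + dj]
                      for di in range(-rr, rr + 1)
                      for dj in range(-cc, cc + 1)]
            out[i][j] = model(window)
    return out

# Toy model: class 2 if the window average exceeds 1, else class 1.
model = lambda win: 2 if sum(win) / len(win) > 1 else 1
grid = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
result = classify_map(grid, 3, 3, model)
# Only the centre cell has a full 3x3 window; its average is 1 -> class 1.
```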
The map values depend on the type of model:

Gaussian mixture (binary): posterior probability of the most likely class, multiplied by the class itself
Gaussian mixture (multiclass): most likely class
Nearest neighbor (binary): proportion of data of the most likely class, multiplied by the class itself
Nearest neighbor (multiclass): most likely class
Classification trees (binary): proportion of data of the most likely class in the terminal node, multiplied by the class itself
Classification trees (multiclass): most likely class
Support vector machines (binary): output of the support vector machine itself
Bagging classification trees (multiclass): most likely class
Bagging classification trees (binary): weighted sum of the models
Boosting classification trees (binary): weighted sum of the models
Bagging support vector machines (binary): weighted sum of the models
Boosting support vector machines (binary): weighted sum of the models

* Help

i.pr.subsets

Creates features files for experiments starting from a features file (parameter features), implementing cross-validation (flag c) or bootstrap resampling (flag b). The option nsets sets the number of bootstrap subsets or the number of folds for cross-validation. The option seed allows the creation of different sets.

* Help

i.pr.featuresadditional

Given a features file (parameter features), applies the rules with which those features were created to a list of training files (option training), creating a new features file (option featuresout).

* Help
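The two resampling schemes of i.pr.subsets can be sketched as follows; this is an illustrative sketch of cross-validation folds and bootstrap resamples with a reproducibility seed, not the module's implementation:

```python
import random

# Sketch: cross-validation folds and bootstrap subsets over the indices
# of n examples, seeded for reproducibility (illustrative only).

def cv_folds(n, nsets, seed):
    """Shuffle indices 0..n-1 and split them into nsets disjoint folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[k::nsets] for k in range(nsets)]

def bootstrap_sets(n, nsets, seed):
    """Draw nsets samples of size n, with replacement."""
    rng = random.Random(seed)
    return [[rng.randrange(n) for _ in range(n)] for _ in range(nsets)]

folds = cv_folds(10, 5, seed=1)        # 5 disjoint folds covering 0..9
boots = bootstrap_sets(10, 3, seed=1)  # 3 resamples of size 10
```

A different seed yields a different partition and different resamples, which is what allows repeated experiments over the same features file.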