This post walks through how to implement a random forest with Spark (MLlib's Java API). The material is presented from a practical, hands-on angle; hopefully you will take something away from it.
The full example is as follows:
import java.io.Serializable;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import junit.framework.TestCase;

import org.apache.spark.SparkConf;
import org.apache.spark.SparkContext;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.mllib.regression.LabeledPoint;
import org.apache.spark.mllib.tree.RandomForest;
import org.apache.spark.mllib.tree.model.RandomForestModel;
import org.apache.spark.mllib.util.MLUtils;
import org.apache.spark.rdd.RDD;

import scala.reflect.ClassTag;

public class RandomForestClassficationTest extends TestCase implements Serializable {

    private static final long serialVersionUID = 7802523720751354318L;

    /**
     * Simple holder for one test point: its true label and the model's prediction.
     * Declared static so that serializing it inside a Spark closure does not drag
     * the enclosing test instance along with it.
     */
    static class PredictResult implements Serializable {
        private static final long serialVersionUID = -168308887976477219L;
        double label;
        double prediction;

        public PredictResult(double label, double prediction) {
            this.label = label;
            this.prediction = prediction;
        }

        @Override
        public String toString() {
            return this.label + " : " + this.prediction;
        }
    }

    public void test_randomForest() {
        SparkConf sparkConf = new SparkConf();
        sparkConf.setAppName("RandomForest");
        sparkConf.setMaster("local");
        SparkContext sc = new SparkContext(sparkConf);

        // Load the LIBSVM-format sample data set from the test classpath.
        String dataPath = RandomForestClassficationTest.class.getResource("/").getPath()
                + "/sample_libsvm_data.txt";
        RDD<LabeledPoint> dataSet = MLUtils.loadLibSVMFile(sc, dataPath);

        // Split into 70% training data and 30% test data (seed = 1 for reproducibility).
        RDD<LabeledPoint>[] rddList = dataSet.randomSplit(new double[]{0.7, 0.3}, 1);
        RDD<LabeledPoint> trainingData = rddList[0];
        RDD<LabeledPoint> testData = rddList[1];

        // Wrap the Scala RDD as a JavaRDD so it can be passed to the Java-friendly API.
        ClassTag<LabeledPoint> labelPointClassTag = trainingData.elementClassTag();
        JavaRDD<LabeledPoint> trainingJavaData =
                new JavaRDD<LabeledPoint>(trainingData, labelPointClassTag);

        int numClasses = 2;                     // number of classes (binary classification)
        Map<Integer, Integer> categoricalFeatureInfos =
                new HashMap<Integer, Integer>(); // empty: all features are treated as continuous
        int numTrees = 3;                       // number of trees in the forest
        String featureSubsetStrategy = "auto";  // let MLlib choose how many features each split considers
        String impurity = "gini";               // split criterion
        int maxDepth = 4;                       // maximum depth of each tree
        int maxBins = 32;                       // maximum number of bins when discretizing features

        // The final argument (1) is the random seed.
        final RandomForestModel model = RandomForest.trainClassifier(trainingJavaData, numClasses,
                categoricalFeatureInfos, numTrees, featureSubsetStrategy, impurity,
                maxDepth, maxBins, 1);

        // Score every test point and keep (label, prediction) pairs.
        JavaRDD<LabeledPoint> testJavaData =
                new JavaRDD<LabeledPoint>(testData, testData.elementClassTag());
        JavaRDD<PredictResult> predictRddResult =
                testJavaData.map(new Function<LabeledPoint, PredictResult>() {
            private static final long serialVersionUID = 1L;

            public PredictResult call(LabeledPoint point) throws Exception {
                double pointLabel = point.label();
                double prediction = model.predict(point.features());
                return new PredictResult(pointLabel, prediction);
            }
        });

        List<PredictResult> predictResultList = predictRddResult.collect();
        for (PredictResult result : predictResultList) {
            System.out.println(result.toString());
        }
        System.out.println(model.toDebugString());
    }
}
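The test above only prints each (label, prediction) pair and the forest itself. If you also want a single error number, the following lines could be appended inside test_randomForest; this is a sketch of one possible approach that reuses the testJavaData and predictRddResult variables from the code above, not something the original test does:

// Count the misclassified test points and divide by the total to get the test error.
long total = testJavaData.count();
long wrong = predictRddResult.filter(new Function<PredictResult, Boolean>() {
    private static final long serialVersionUID = 1L;

    public Boolean call(PredictResult result) throws Exception {
        return result.label != result.prediction;   // keep only the misclassified points
    }
}).count();
double testError = (double) wrong / total;
System.out.println("Test Error = " + testError);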
The resulting forest, printed via toDebugString(), looks like this:
TreeEnsembleModel classifier with 3 trees

  Tree 0:
    If (feature 435 <= 0.0)
     If (feature 516 <= 0.0)
      Predict: 0.0
     Else (feature 516 > 0.0)
      Predict: 1.0
    Else (feature 435 > 0.0)
     Predict: 1.0
  Tree 1:
    If (feature 512 <= 0.0)
     Predict: 1.0
    Else (feature 512 > 0.0)
     Predict: 0.0
  Tree 2:
    If (feature 377 <= 1.0)
     Predict: 0.0
    Else (feature 377 > 1.0)
     If (feature 455 <= 0.0)
      Predict: 1.0
     Else (feature 455 > 0.0)
      Predict: 0.0
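A trained forest does not have to live only inside the test method. As a minimal sketch, assuming Spark 1.3 or later (where RandomForestModel supports save/load) and a writable local path (the path below is made up for illustration), the model printed above could be persisted and reloaded at the end of the test like this:

// Persist the trained model to disk and load it back (the path is hypothetical).
model.save(sc, "target/tmp/sparkRandomForestModel");
RandomForestModel sameModel = RandomForestModel.load(sc, "target/tmp/sparkRandomForestModel");
// The reloaded model prints the same three trees as above.
System.out.println(sameModel.toDebugString());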
That covers how to implement a random forest with Spark. If you have run into a similar question, the walkthrough above should help you work through it. For more on related topics, follow the Yisu Cloud industry news channel.