Spark 0.9.1 installation
Unpack the tarball and change the owner of the directory, then log in as the hadoop user and edit the configuration files.

conf/slaves:
hadoop1
hadoop2
hadoop3

conf/spark-env.sh:
export SPARK_MASTER_IP=hadoop1
export SPARK_MASTER_PORT=7077
export SPARK_WORKER_CORES=1
export SPARK_WORKER_INSTANCES=1
export SPARK_WORKER_MEMORY=3g

Copy the Spark directory to each node with scp.

********************************************************************************************************

Shark installation

Download and unpack the source, then build it:

SHARK_HADOOP_VERSION=2.2.0 SHARK_YARN=true sbt/sbt assembly

Edit the configuration file conf/shark-env.sh:

export SPARK_MEM=2g

# (Required) Set the master program's memory
export SHARK_MASTER_MEM=1g

# (Optional) Specify the location of Hive's configuration directory. By default,
# Shark run scripts will point it to $SHARK_HOME/conf
export HIVE_CONF_DIR="/app/hadoop/hive013/conf"

# For running Shark in distributed mode, set the following:
export HADOOP_HOME="/app/hadoop/hadoop220"
export SPARK_HOME="/app/hadoop/spark091"
export MASTER="spark://hadoop1:7077"

Copy the Shark directory to each node with scp.

********************************************************************************************************

Start the Hive metastore service:

nohup bin/hive --service metastore > metastore.log 2>&1 &

Start the Shark command-line client:

bin/shark

Or run Shark as a server and connect a client to it:

./bin/shark --service sharkserver <port>
./bin/shark -h <server-host> -p <server-port>
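The spark-env.sh edits above can be scripted. This is a minimal sketch, assuming it is run from the top of the Spark installation directory; the values (master host hadoop1, 3g per worker) are copied verbatim from this guide and should be adjusted for your own cluster.

```shell
#!/bin/sh
# Append the worker settings from this guide to conf/spark-env.sh.
# Run from the Spark installation root so conf/ is the right directory.
mkdir -p conf
cat >> conf/spark-env.sh <<'EOF'
export SPARK_MASTER_IP=hadoop1
export SPARK_MASTER_PORT=7077
export SPARK_WORKER_CORES=1
export SPARK_WORKER_INSTANCES=1
export SPARK_WORKER_MEMORY=3g
EOF
```

Appending (rather than overwriting) keeps any settings already present in spark-env.sh.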
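The "copy to each node" steps can be sketched as a small loop. The node names (hadoop2, hadoop3), the hadoop user, and the /app/hadoop path are assumptions taken from this guide; the `echo` makes this a dry run that only prints the commands, so drop it to actually copy.

```shell
#!/bin/sh
# Dry run: print the scp commands that would push the installed Spark
# tree to each worker node, and save the plan for review.
SPARK_DIR="/app/hadoop/spark091"
for node in hadoop2 hadoop3; do
  echo scp -r "$SPARK_DIR" "hadoop@${node}:/app/hadoop/"
done > scp_plan.txt
cat scp_plan.txt
```

The same loop works for the Shark directory by changing SPARK_DIR.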