A Spark cluster to be used with Scala from the [[https://spark.apache.org/docs/latest/quick-start.html|interactive console]] can be spawned in a similar fashion, except we start an interactive Slurm job and use the wrapper script ''lsf-spark-shell.sh'' instead:
<code>
srun --pty -p int -N 4 --ntasks-per-node=20 -t 01:00:00 lsf-spark-shell.sh
</code>
===== Running Hail =====
A Slurm job running the ''pyspark''-based console for Hail can then be submitted as follows:
<code>
srun --pty -p int -N 4 --ntasks-per-node=20 -t 01:00:00 lsf-pyspark-hail.sh
</code>
Once the console is running, initialize Hail with the global Spark context ''sc'' in the following way:
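As a minimal sketch, assuming a Hail 0.2 installation is available on the ''PYTHONPATH'' of the console (Hail 0.1 installations use ''HailContext(sc)'' instead):
<code>
import hail as hl

# Hand the Spark context created by the pyspark console over to Hail,
# so Hail attaches to the running cluster instead of spawning its own.
hl.init(sc=sc)
</code>
After this call, Hail operations submitted from the console run as jobs on the Spark cluster started above.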