Using Apache Spark for cosmology

Apache Spark is a big-data framework for working on large distributed datasets. Although widely used in industry, it remains little known in the scientific community. The goal of this seminar is to introduce the framework to newcomers and to show that the technology is now mature enough to be used, without advanced programming skills, by astronomers and cosmologists to perform simple analyses over datasets as large as those expected from the next generation of galaxy surveys. After a pedagogical introduction to Spark and distributed computing, I will develop a simple yet powerful use case: interactively analyzing properties of 6 billion galaxies generated by a fast simulation representing 10 years of LSST data. I will then present some recent developments related to exploring large-scale catalogs, focusing on their interactive visualization. Although the discipline has largely taken the route of high-performance computing (HPC), I will show that there is no antagonism between this approach and the HPC one. Combining the best of both worlds is an exciting prospect, and one of the goals of the AstroLab organization (https://astrolabsoftware.github.io/), which aims to assemble the scientific contributions in the field.