Courses
Programming-Language-based approaches for parallel computing and auto-tuning
Advanced seminar (WS 16) - Organisation
News
- 20.10.2016 Initial meeting and assignment of topics at 13:00 in room 2101.
- 30.09.2016 Enrollment is now enabled in jExam
Content
The seminar examines various ways to write performant parallel programs. To achieve optimal performance for different input data, static and dynamic auto-tuning approaches can be utilized.
There exist different approaches for writing efficient parallel code. The most straightforward solution, writing explicitly parallel code with the means provided by the operating system or the accelerator drivers (e.g. pthreads), has several severe disadvantages: not only is the resulting code highly platform-specific, but it is also very hard to modify in order to optimise performance (or other non-functional properties such as energy efficiency). Therefore, the most common approach is the use of pragma languages in combination with the corresponding compiler extensions and runtimes. Well-known examples of pragma languages are OpenMP and OpenACC.
While these approaches are supported by established compilers, they offer programmers no means to automatically optimise their code. Therefore, higher-level pragma languages and language extensions exist that support (semi-)automatic optimisation and auto-tuning.
This seminar will analyse and evaluate such approaches. Each approach is to be surveyed, tested, and categorized according to given criteria.
Topics
The topics can be put into three groups, each of which is to be investigated by one or more students:
- Parallelization and auto-tuning with extensible compilers.
- The ROSE compiler and its use for parallelization and auto-tuning
- Semi-automatic parallelization techniques
- The polyhedral model
- Parallelization DSLs and runtimes
- OmpSs and other high-level pragma languages
- LARA and aspect-oriented parallelization approaches
- The Insieme compiler infrastructure and subsequent work
Organisation
Responsible: Dipl.-Inf. Johannes Mey, Dr.-Ing. Sebastian Götz
Written Paper: 5 to max. 8 pages (LaTeX, LNCS style)
Talk/Presentation: 30 min. + short demonstration
The main seminar will be organized in blocks; we plan for 2-4 blocks. The dates will be arranged at the initial meeting.
To pass the main seminar, you have to write a paper and give a presentation as stated above.
Allowances
The course can be used for the modules as specified by the department: here. Students with other exam regulations can attend the course, but cannot do the exam.