Latest revision as of 09:11, 15 February 2017
Soon-Heum Ko (NSC)
Application expert in computational fluid dynamics, 100% full-time equivalent, financed by SNIC (30%) and PRACE (70%). General activities: PRACE European project, SMHI HYPE parallelisation, SNIC parallelisation support.
Quick facts
- If you have trouble pronouncing Soon-Heum, just call me Jeff
- At NSC since 2011
- Ph.D. in 2008; two years of postdoctoral experience in computational science (code porting to scientific frameworks, hybrid CFD-MD simulation with scheduling of coupled applications)
- Details on my experience, research, and biography can be found here
Expertise
- MPI / OpenMP
- Fortran / C / C++
- Edge
- DDT
- Scalasca / VTune Amplifier
Projects
- Essense Code Optimisation
Old Projects
Ended in 2012
- LES_Code_Parallelization NSC-promoted code parallelization support: a collaboration with Prof. L. Davidson of Chalmers University of Technology on parallelizing his LES code.
- Performance_Analysis_of_ad_OSS_Program Performance analysis of the ad_OSS program: profiling and analysis of an in-house water-molecule modeling code, 'ad_OSS', carried out in consultation with Prof. Lars Ojamae at LiU.
- Synthetic_Benchmark_on_Curie Synthetic benchmark analysis on the Curie Tier-0 system: a 1IP task in the PRACE project, which aims to develop the European HPC ecosystem.
- NSC_GPU_and_Accelerator_Pilot GPU/accelerator pilot project at NSC: construction of the GPU/Intel MIC systems and testing of their programming models.
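The profiling-and-analysis work in the ad_OSS item above follows a standard pattern: instrument a run, rank functions by cumulative time, and inspect the top entries. A minimal sketch of that workflow using Python's built-in cProfile (the real code is Fortran/C, so the kernel below is only a stand-in):

```python
import cProfile
import io
import pstats

def hot_loop(n):
    # Stand-in compute kernel (partial sum of the Basel series);
    # in the real project this would be the application's hotspot.
    s = 0.0
    for i in range(1, n + 1):
        s += 1.0 / (i * i)
    return s

# Instrument a run of the kernel.
profiler = cProfile.Profile()
profiler.enable()
result = hot_loop(100_000)
profiler.disable()

# Rank functions by cumulative time and show the top entries.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(3)
print(stream.getvalue())
```

The same identify-the-hotspot loop applies regardless of the tool (Scalasca, VTune Amplifier, etc.); only the instrumentation mechanism changes.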
Ended in 2013
- HYPE Code Parallelisation Performance analysis of SMHI's HYPE code, which models ground-water flow for pollution analysis
- Parallel I/O Implementation on the Multiple Sequence Alignment Software ClustalW-MPI Design and implementation of a parallel I/O interface for faster I/O on massive sequence-string datasets across thousands of cores
- Performance Benchmark of NEMO Oceanography Code Compilation-level tuning and performance benchmarking of the NEMO oceanography code on the CURIE PRACE resource
- Enabling Xnavis for Massively Parallel Simulations of Wind Farms Implementation of MPI-I/O for reading/writing structured multiblock CFD datasets
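The two I/O items above rest on the same core idea: each MPI rank computes a disjoint byte offset into a shared file from the sizes of the blocks owned by lower-numbered ranks, then reads or writes its own slice independently (e.g. via explicit-offset calls such as MPI_File_write_at). A minimal sketch of that offset arithmetic, with hypothetical block sizes, in plain Python:

```python
# Each rank owns one block of a structured multiblock dataset.
# Its starting file offset is the exclusive prefix sum of all
# earlier ranks' block sizes -- the quantity that MPI-I/O file
# views or explicit offsets encode.

def file_offsets(block_sizes_bytes):
    """Return the starting byte offset of each rank's block."""
    offsets = []
    total = 0
    for size in block_sizes_bytes:
        offsets.append(total)  # exclusive prefix sum
        total += size
    return offsets

# Hypothetical block sizes for 4 ranks, in bytes:
sizes = [4096, 8192, 4096, 16384]
print(file_offsets(sizes))  # [0, 4096, 12288, 16384]
```

Because the offsets are disjoint, no rank-0 gather/scatter or file locking is needed, which is what makes the approach scale to thousands of cores.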
Ended in 2014
- Computer-Aided Drug Design Enabling LSDALTON's DFT method (providing a real 64-bit environment) for simulating large molecules of biological interest
- Dalton CPP-LR parallelization Parallelisation of one branch of the DALTON code
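A "real 64-bit environment" matters in the drug-design item above because counters in integral-heavy DFT codes overflow 32-bit integers for large molecules: the number of unique two-electron integrals grows roughly as N^4/8 with the number of basis functions N. A back-of-the-envelope check (illustrative arithmetic only, not LSDALTON's actual bookkeeping):

```python
# 32-bit signed integers overflow above 2**31 - 1.
# The unique two-electron integral count grows roughly as N**4 / 8
# with basis-set size N, so large molecules exceed the 32-bit range
# and integral indices/counters must be 64-bit.

INT32_MAX = 2**31 - 1

def n_two_electron_integrals(n_basis):
    """Approximate unique two-electron integral count (~N^4 / 8)."""
    return n_basis**4 // 8

for n in (100, 300, 500):
    count = n_two_electron_integrals(n)
    verdict = "overflows int32" if count > INT32_MAX else "fits in int32"
    print(f"N = {n}: ~{count} integrals -> {verdict}")
```

Around N = 500 basis functions the count passes 2^31, which is why simulating large biomolecules requires 64-bit integer support throughout the code.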