Soon-Heum Ko (NSC)
Application expert in computational fluid dynamics, 100% full-time equivalent, financed by SNIC (30%) and PRACE (70%)
Quick facts
- If you have trouble pronouncing Soon-Heum, just call me Jeff
- Working at NSC since 2011
- Ph.D. in 2008; two years of postdoctoral experience in computational science (code porting to scientific frameworks, hybrid CFD-MD simulation with scheduling of coupled applications)
- Details on my experience, research, and biography can be found here
Expertise
Projects
Old Projects
Ended in 2012
- LES_Code_Parallelization NSC-promoted code parallelization support. A collaboration with Prof. L. Davidson from Chalmers University on parallelizing his LES code.
- Performance_Analysis_of_ad_OSS_Program Performance analysis of the ad_OSS program. Profiling and analysis of an in-house water-molecule modeling code called 'ad_OSS', carried out in consultation with Prof. Lars Ojamae at LiU.
- Synthetic_Benchmark_on_Curie Synthetic benchmark analysis on the Curie Tier-0 system. A 1IP task in the PRACE project, which aims to develop the European HPC ecosystem.
- NSC_GPU_and_Accelerator_Pilot GPU/Accelerator pilot project at NSC. Construction of a GPU/Intel MIC system and testing of the corresponding programming models.
Ended in 2013
- HYPE Code Parallelisation Performance analysis of SMHI's HYPE code, which simulates ground flow for pollution analysis
- Parallel I/O Implementation on the Multiple Sequence Alignment Software ClustalW-MPI Design and implementation of a parallel I/O interface for faster I/O on massive sequence datasets when running on thousands of cores (see the MPI-I/O sketch after this list)
- Performance Benchmark of NEMO Oceanography Code Compilation-level tuning and performance benchmarking of the NEMO oceanography code on the PRACE CURIE resource
- Enabling Xnavis for Massively Parallel Simulations of Wind Farms Implementation of MPI-I/O for reading/writing structured multiblock CFD datasets
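The two I/O projects above both rely on MPI-I/O, where all ranks write disjoint parts of one shared file collectively instead of each rank writing its own file. The sketch below is not taken from either project; the file name, block size, and double-precision payload are illustrative assumptions. It shows the basic pattern in C:

 #include <mpi.h>

 #define BLOCK 1024  /* doubles written per rank; illustrative size */

 int main(int argc, char **argv)
 {
     int rank;
     double buf[BLOCK];
     MPI_File fh;

     MPI_Init(&argc, &argv);
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);

     /* Fill this rank's block with recognizable placeholder data. */
     for (int i = 0; i < BLOCK; i++)
         buf[i] = rank + i / (double)BLOCK;

     /* All ranks open the same file; per-rank offsets keep the blocks disjoint. */
     MPI_File_open(MPI_COMM_WORLD, "dataset.bin",
                   MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
     MPI_Offset offset = (MPI_Offset)rank * BLOCK * sizeof(double);

     /* Collective write: the MPI library can aggregate requests across
        ranks, which is what makes this scale to thousands of cores. */
     MPI_File_write_at_all(fh, offset, buf, BLOCK, MPI_DOUBLE,
                           MPI_STATUS_IGNORE);

     MPI_File_close(&fh);
     MPI_Finalize();
     return 0;
 }

A real CFD or sequence-alignment dataset would replace the flat offsets with a file view describing the structured multiblock or string layout, but the open/collective-write/close pattern is the same.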
Ended in 2014
- Computer-Aided Drug Design Enabling LSDALTON's DFT method (enabling a true 64-bit environment) for simulating large molecules of biological interest
- Dalton CPP-LR parallelization Parallelization of one branch of the DALTON code