Intel® Modern Code


The Intel Modern Code community is an initiative to disseminate knowledge about how to re-architect and optimize code for parallelism, in order to exploit the full potential of today's and tomorrow's computers and supercomputers. The community is composed of experts who provide libraries, support, and training on modern code techniques.

NCC joined the Modern Code community as a Modern Code Partner (MCP) in May 2015 and has been offering training and code modernization consultancy since then.

If you want to host a training session on software modernization at your site, or want to be notified of upcoming training sessions, contact us.

Silvio Luiz Stanzani is a Research Associate at the UNESP Center for Scientific Computing (Núcleo de Computação Científica da Universidade Estadual Paulista – NCC/UNESP), working on the Intel Modern Code Partner project. He holds a Ph.D. from Escola Politécnica da Universidade de São Paulo (EPUSP) and a Master's degree from Universidade Católica de Santos (Unisantos). He has worked on software development focused on parallel programming and distributed computing since 2005.

Raphael Mendes de Oliveira Cóbe holds a Ph.D. in Computer Science from the Institute of Mathematics and Statistics of the University of São Paulo, with an emphasis on Artificial Intelligence. He has over 10 years of experience in software development. He currently works as a Research Associate at the UNESP Center for Scientific Computing, mainly on projects related to cloud computing, grid computing, and big data.

Rogério Luiz Iope holds a B.S. in Applied Physics (minor in Instrumentation & Microelectronics) and a Ph.D. in Electrical Engineering (major in Computer Engineering) from the University of São Paulo (USP). He has worked on the development of high-performance computing and distributed systems since 2001. He is a Research Associate at the UNESP Center for Scientific Computing, responsible for the development and technical coordination of new projects in partnership with industry. He is Co-PI of the “Intel Parallel Computing Center” and “Intel Modern Code Partner” programs established at the UNESP CSC in partnership with Intel.

Calebe de Paula Bianchini has worked on a variety of projects over the last 15 years, most of them R&D projects in the HPC area, such as Galileo (Petrobrás/USP) and IPCC (Intel/UNESP). He is also an associate professor at Mackenzie University, leading small in-house projects in both HPC and software engineering. He holds a Ph.D. in computer engineering from USP/Brazil and an M.Sc. in computer science from UFSCar/Brazil; the former focused on HPC and grid computing, the latter on software engineering and networks. He is currently optimizing high-energy physics software using fine-grained (e.g. SIMD) and coarse-grained (e.g. threads) parallelism for Intel® Xeon Phi™ coprocessors.

Guilherme Amadio is a postdoc at SPRACE. He works on the parallelization of high-energy physics software for Intel® Xeon Phi™ coprocessors. He holds a Ph.D. in aerospace engineering from the University of Illinois at Urbana-Champaign; his thesis work focused on simulating the microstructures of solid rocket propellants. Before going to Illinois, he studied nuclear astrophysics at the University of Tokyo, in Japan.

Jefferson Fialho Coelho is an undergraduate student in Analysis and Systems Development at the Technology College of São Paulo (Faculdade de Tecnologia de São Paulo). His research interests include strategies for the efficient use of accelerators and coprocessors on hybrid parallel architectures, performance optimization of parallel algorithms, and high-performance computing (HPC) environment management.

Numerical methods for biphasic flow in heterogeneous porous media on MIC architectures

This research proposes to optimize the code of “Numerical Methods for Biphasic Flow in Heterogeneous Porous Media”, developed by the LNCC Oil and Gas research group, for heterogeneous computational environments. The goal is to develop a parallel version for heterogeneous multicore server architectures. The proposal is to change the scheduling of the threads generated by the compiler through the OpenMP shared-memory programming model, in order to reduce the synchronization time between them and increase performance.

Research Team:

Carla Osthoff

Institution:  Laboratório Nacional de Computação Científica (LNCC)

Increasing power efficiency on convolutional neural networks for field-programmable gate arrays

Convolutional neural networks have played a prominent role in recent years in the field of computer vision, becoming the dominant approach for almost all recognition and detection tasks. The most commonly used hardware, the GPU, provides high performance with relatively low power consumption; still, applications where power efficiency is more critical than raw performance could benefit from other hardware architectures. This project proposes using hardware processing optimization techniques to improve the power efficiency of convolutional neural networks on FPGAs, which are attractive due to their greater flexibility and lower power consumption compared to GPUs. The topics to be explored in the search for better power efficiency are matrix sparsity, reduction of multiplications, and the use of binary and ternary data types. Training these networks involves performing a large number of matrix operations, such as addition and multiplication, which are time-consuming on conventional computers. These operations, however, are well suited for multi/many-core architectures, such as Intel Xeon and Xeon Phi, which would allow faster and more complete training.

Research Team:

Vitor Finotti (Master Student)
Bruno de Carvalho Albertini (Advisor)

Institution: Universidade de São Paulo and Unesp / Núcleo de Computação Científica

Meta-heuristics algorithms

The contribution of meta-heuristics – in particular evolutionary algorithms – to the field of optimization is extremely important, as they help find optimized solutions for complex real-life problems and offer great flexibility in problem modelling. This work proposes a model for optimizing the job shop schedule that searches both for the best sequence of operations and for the lots and sizes into which each operation can be independently subdivided within the same order. The model supports alternative resources, operations using two or more resources, and unavailability intervals, which lends it great robustness and applicability. In addition, executing tasks in parallel can improve performance in the search for solutions.

Research Team:

Leandro Mengue (master Student)

Arthur Tórgo Gómez (Supervisor)

Institution: Unisinos

Optimization of complex numerical modelling applications

This project focuses on the performance analysis of a complex numerical modelling application that evaluates the coupling between geomechanics and multiphase flows. The idea is to evaluate two-phase immiscible flow in a strongly heterogeneous deformable carbonate underneath a rock salt composed of halite and anhydrite, displaying creep behaviour with the viscous strain ruled by a nonlinear constitutive law of power-law type. This application is fundamental in reservoir engineering to detect and explore deeper formations. In this project, we propose a detailed performance analysis of the application using the VTune and Advisor tools from Intel, followed by further parallelization of the code. Our goal is to provide a more efficient code that runs on multicore processors accelerated by an Intel Xeon Phi processor. The main idea is to detect the hot spots in the code and propose parallel solutions that include the use of OpenMP and vectorization.


“Optimizing the Reservoir Engineering Application Using the Parallel Studio XE Tool and Vectorization”

Research Team:

Leandro Pereira (Undergraduate Student)

Cristiana Bentes (Supervisor)

Institution: UERJ

Profrager Optimization

Protein Structure Prediction (PSP) is one of the most important topics in the field of bioinformatics, and several important applications in medicine (such as drug design) and biotechnology (such as the design of novel enzymes) are based on PSP methods. Profrager is a fragment library generation tool developed at the Brazilian National Laboratory for Scientific Computing (LNCC) that aims to improve the performance of PSP by generating fragment libraries that minimize the PSP search space. Profrager experiments can be computationally intensive, and a possible approach is to rely on parallel architectures to improve scalability. Current trends in the design of parallel computing architectures are towards increasing the computational power of multi-core processor servers by aggregating many-core coprocessors or accelerators. Such hybrid architectures have the potential to speed up applications and improve their throughput, but it is challenging to efficiently use all the processing power offered by the heterogeneous resources. The objective of this project is to evaluate how to achieve high levels of performance using Intel multi-core and many-core architectures.


White Paper: Optimization of ProFrager, a Protein Structure and Function Prediction Tool

Research Team:

Silvio Luiz Stanzani (Research Associate)

Raphael Mendes de Oliveira Cóbe (Research Associate)

Rogério Luiz Iope (Research Associate)

Institution: Unesp / Núcleo de Computação Científica

Parallel Programming Marathon

This project was part of the II Regional School of High Performance Computing of Rio de Janeiro (ERAD-RJ 2016), which aimed at putting students in touch with the area and increasing their interest in HPC. The marathon gives students the opportunity to test their parallel programming skills using Xeon and Xeon Phi processors. Students are grouped into teams of up to 3 people. The competition has two stages: a warm-up, where they get familiar with the computational environment, and the contest itself, where they have 3 hours to parallelize a set of applications.
Judging is strict. At the beginning of the contest, teams receive problem descriptions and sequential (serial) solutions. Scoring takes into account not only the correctness of the solution but also the performance speedup of the parallel (or distributed) version, measured according to criteria defined by the committee for the current contest. The winning team is the one with the greatest accumulated speedup across all applications.

Research Team:

Tiago Alves

Alexandre Sena

Leandro Marzulo

Institution: UERJ

Optimization of Geant

The UNESP Intel Parallel Computing Center is mainly involved in R&D efforts to adapt high-energy physics (HEP) software tools, in particular the simulation framework known as Geant, to modern computing architectures that support multi-threading and other parallel processing techniques, making data processing more cost effective. Geant is a toolkit for simulating the passage of particles through matter using Monte Carlo methods. It is one of the most important software tools for the HEP community, incorporating physics knowledge from modern particle and nuclear physics experiments and theory, and it has been designed to model all the elements associated with detector simulation: the geometry of the system, the electromagnetic fields inside the materials, the physics processes governing particle interactions, the response of sensitive detector components, the storage of events and tracks, the visualization of the detector and the particle trajectories, and the capture and analysis of simulation data at different levels of detail and refinement. It is an open source project, founded in 1994, developed and maintained by an international collaboration of around one hundred physicists and computer scientists. Fully coded in C++, it is considered both a toolkit and a framework: users can choose any of its software libraries to use within a specific application, and its functionality can be expanded through its many interface points. Geant4, the current version, is the re-engineered, object-oriented successor of Geant3, which was written in Fortran. Geant simulations generally involve long calculation times, making them compute-bound workloads that may be well suited for execution on Intel Xeon Phi coprocessors.
The plan for code performance improvements at the UNESP Parallel Computing Center includes developing the tools and metrics needed to evaluate the performance of multi-threaded HEP applications running on Intel Xeon Phi coprocessors. The researchers intend to test vector-coprocessor prototypes in a hybrid computing system, such as the one made available by the Intel/Unesp Modern Code project, and to analyze the performance of the next generation of the Intel MIC architecture (Knights Landing), evaluating the redesign efforts that may be necessary to adopt the new technology. These activities are closely related to the development of Geant-V, a new generation of the Geant simulation engine, which will natively include massive parallelism.

Research Team:

Calebe Bianchini

Guilherme Amadio

Institution: Unesp / Núcleo de Computação Científica

Optimization/Modernization of PDE Solvers Applied to Flow Dynamics (Oil & Gas)

The objective of this project is to analyze and optimize a Godunov-type semi-discrete central scheme for a particular hyperbolic problem arising in porous media flow, targeting the Intel Xeon architecture, in particular the Haswell processor family, which brings a new and more advanced instruction set (AVX2) to support vectorization.


White Paper: Fine-Tuning Optimization for a Numerical Method for Hyperbolic Equations Applied to a Porous Media Flow Problem with Intel® Tools

Research Team:

Frederico Cabral

Institution: Laboratório Nacional de Computação Científica (LNCC)

Accelerating Weather Forecast microphysics using Heterogeneous Parallel Computing

The objectives of this project are: to understand the complexity of weather forecasting using traditional CPU solutions; to increase the resolution of weather forecasts run for Colombian climate conditions; and to determine the real effort needed to rewrite, migrate, or adapt legacy solutions, written mainly in Fortran, to new massively parallel processor architectures.

This research uses heterogeneous accelerators (GPUs and vector processors) for the most computationally demanding processes, such as microphysics (cloud and precipitation), in a WRF model, in order to run at higher resolutions and thus increase the accuracy of weather forecasts for Colombian conditions.
The use of accelerators in weather forecasting increases computational capacity at low cost for Colombian meteorological agencies such as IDEAM (the national agency for weather forecasting and hydrology studies).


Poster (ACM SRC poster category): “Efficiently Energetic Acceleration EEA-Aware For Scientific Applications of Large-Scale On Heterogeneous Architectures”

Research Team:

Esteban Hernandez (PhD Student)

Carlos Montenegro Marin (Supervisor)

Institution: District University of Bogotá

Optimizing BRAMS for Multicore Processors

Over the last two years, Jairo Panetta and Simone Shizue Tomita Lima isolated the dynamics from the rest of the BRAMS code. This stand-alone code is the basis for future versions of BRAMS. A long process eliminated coding practices that may lead to race conditions: global scratch areas were eliminated, all procedures have explicit interfaces at the points of call, the intent of each procedure argument is declared, and use association is restricted to procedure interfaces. This BRAMS subset is named the isolated dynamics. Besides dynamics, it contains input, output, and initialization.

The isolated dynamics is the basis for OpenMP parallelism and memory hierarchy experimentation, as described in the next paragraph. It also serves as the basis for experimenting with higher-order approximations of dynamics processes. The current mathematical formulation of dynamic processes uses a first-order approximation of derivatives that requires a small integration time step. The original BRAMS code hardwired the ghost zone length to one, preventing the exploitation of higher-order approximations. The isolated dynamics was recoded to allow a user-defined ghost zone length. Coding and coupling of higher-order approximations are currently under way on this version of the isolated dynamics, conducted by Saulo R. Freitas and researchers from Germany. In this work, 5th-order transport schemes allied with 3rd-order time integration methods will greatly enhance model accuracy, which is essential to improve the forecast of rainfall for the next generation of CPTEC/INPE operational products at cloud scales (~km) that will run on future supercomputer systems.

Recently, an M.Sc. thesis at INPE compared two ways of exploiting OpenMP parallelism in scalar advection, a small part of the isolated dynamics. The first was the classical parallelization of each loop nest, with nests running through the entire MPI subdomain. The second was tiling the horizontal MPI subdomain. Tiling changed the original 3D array of atmospheric fields into a 2D array of pointers to 3D tiles. These tiles occupy consecutive memory positions to avoid cache conflicts. Since advection on distinct tiles is mutually independent, OpenMP parallelism is trivially implemented by parallelizing the outermost loop that runs through tiles, dispatching advection on each tile. While the second form of parallelism is potentially more efficient, it requires work replication, since advection requires computing fluxes at cell boundaries: fluxes at cells on tile boundaries are computed twice, once for each tile.

As expected, the speed-up of the second form was close to ideal and higher than that of the first form. But the execution times of the second form were higher than those of the first, mainly at small core counts, due to work replication. As the core count increases, the execution times of the second form become lower than those of the first.

It is not at all clear whether these results carry over to the entire dynamics. Work replication leads to communication-avoiding algorithms and higher speed-ups, but not necessarily to lower execution times. Storing each tile at consecutive memory positions makes effective use of the cache hierarchy and the NUMA architecture, but the optimal tile size varies with the dynamic process, due to the distinct number of atmospheric fields used by each process.

Experimentation is clearly required.

We propose two tracks of activities. The first track is dedicated to the isolated dynamics; the second to the physics. This structure accommodates the different stages of maturity of the two code packages. It allows experimentation with data structures, vectorization, and parallelism in the part of the code that is ready for OpenMP parallelism (the dynamics), while potential race conditions are eliminated in the part that is not (the physics). The two tracks synchronize when the physics is ready for OpenMP parallelism.

There are four activities on the isolated dynamics:

  1. Initial performance evaluation of the dynamics (M1-M3)

Performance evaluation measures how efficiently the base code uses the memory hierarchy and measures the vectorization ratio. The first performance evaluation will consider a single core. A second evaluation measures memory hierarchy use and parallelism interference among cores using multiple MPI processes. The results of these evaluations guide the optimization effort of the next activities. Performance analysis will be based on Intel Parallel Studio tools, mostly VTune and Advisor.

  2. Experimenting with coding strategies for the dynamics (M4-M6)

This activity experiments with data structure layout strategies to enhance memory hierarchy usage and with forms of parallelism exploitation through OpenMP coding. It essentially replicates the advection experiment on selected parts of the remaining dynamic processes. Intel Parallel Studio tools, mostly VTune, will be used to measure memory hierarchy usage.

  3. Implementing the selected coding strategy on the full dynamics (M7-M12)

This step applies the selected data structure and form of OpenMP parallelism to the entire dynamics. The result is an OpenMP-coded dynamics with potentially improved vectorization ratio and enhanced memory hierarchy usage. Execution time as a function of the number of OpenMP threads is the central measure of optimization. Intel VTune may be used to explain performance details.

  4. Final dynamics performance evaluation (M9-M12)

The performance of the final dynamics code and its OpenMP scalability on a full node are measured and compared with the corresponding base code performance. Execution time as a function of the number of OpenMP threads is the final performance measure.

There are also four activities on physics modules:

  1. Select which physics modules to include (M1)

The current version of BRAMS has many physics modules, some of them outdated and unused. This activity selects which physics modules should be included in the desired code.

  2. Make selected modules thread safe and enhance their vectorization (M2-M12)

Visit each selected physics module and eliminate coding practices that prevent the introduction of OpenMP parallelism. We intend to use Intel Inspector in this activity.

Concerning vectorization, some physics modules receive a single atmospheric column at each invocation, while others receive a set of independent atmospheric columns. In the former case, vectorization is limited by dependencies between computations on consecutive atmospheric levels within a single column. In the latter case, the set of independent columns is the desired vectorization direction. Intel Advisor should help determine the vectorization ratio and identify candidates for improvement.

  3. Insert OpenMP on physics modules and couple with dynamics (M7-M12)

Build a driver for each physics module, introduce OpenMP in the driver, and couple the driver with the dynamics. The driver’s structure varies with the number of atmospheric columns that each module operates on simultaneously. In any case, the driver loops over atmospheric columns, invoking the physics module at each iteration. If the module accepts a set of atmospheric columns at a time, the driver’s loop partitions the domain of atmospheric columns into sets; if the module accepts a single column at a time, the driver loops over all columns of its domain. Introducing OpenMP is trivial in both cases.

  4. Performance evaluation of the coupled model (M9-M12)

Measure the performance of the coupled model on a single core and its OpenMP scalability on a full node. Execution time as a function of the number of OpenMP threads is the final measure of performance. Intel VTune should help to explain the details.


White Paper: Tutorial for Using the Intel® Xeon® Phi™ Accelerated Nodes at NCC-Unesp

Research Team:

Simone Shizue Tomita Lima
Daniel Massaru Katsurayama
Jairo Panetta

Institution: INPE

Analysis of Energy Consumption and Performance Efficiency on the Intel MIC architecture

Heterogeneous architectures composed of a CPU and accelerators (GPUs or Xeon Phi) are present in most supercomputers listed in the Top500 ranking (the 500 most powerful machines) and are, consequently, a trend in the future of high-performance computing. The Intel Xeon Phi coprocessor is a manycore architecture based on energy-efficient cores. It provides local memory and a high-speed bus connection, allowing native application execution. In this scenario, it is important to consider programming techniques and models that enable the efficient use of computing resources to improve both the performance and the power consumption of parallel applications.

This work aims to analyze the relationship between performance and power consumption on manycore architectures, specifically Intel MIC (Many Integrated Core) Xeon Phi coprocessor platforms. The main approach is to define a set of performance and energy consumption counters to be collected with micsmc while running benchmarks.

There are several benchmarks and algorithms for homogeneous high-performance architectures; for heterogeneous or hybrid computing, the options are more restricted. In this work we will initially consider: 1) HPLinpack (High-Performance Linpack), which solves systems of linear equations in double precision (64-bit); 2) SGEMM and DGEMM; 3) the NAS-OpenMP Parallel Benchmark, a version of NAS targeted at manycore architectures, focused on MIC and Xeon Phi.

Expected Results: 

The tests and assessments carried out in this work consider not only performance but, mainly, energy efficiency. To achieve this, it is important to identify and understand which factors influence energy consumption and what their impact on performance is. The results will be collected from micsmc counters during benchmark execution in different scenarios. For example, for each benchmark we will consider: CPU and one Xeon Phi; CPU and two Xeon Phi; CPU only; and Xeon Phi only.

Research Team:

Robson Gonçalves (Master Student)
Márcia Cera (Supervisor)

Institution: UniPampa

Development of a parallel cellular automaton using OpenCL


Research Team:

Maelso Bruno Pacheco Nunes Pereira (Master Student)
Alisson Brito (Supervisor)

Institution: UFPB


Dataflow Resiliency and Scalability

The Trebuchet Dataflow Runtime enables programmers to write parallel code for multi- and manycore architectures by describing their algorithms as dataflow graphs. In this project we aim to provide experimental evidence of the performance and resiliency achieved with Trebuchet. The performance-oriented experiments show that applications parallelized with Trebuchet achieve excellent performance (equal to or better than traditional approaches such as OpenMP) and scale in accordance with our theoretical model for scalability in dataflow graphs. The resiliency portion of the project shows that Trebuchet is also able to recover from transient faults using the Dataflow Error Recovery (DFER) model.


Paper: A Resilient Scheduler for Dataflow Execution

Research Team:

Tiago Alves
Leandro Marzulo
Felipe França

Institution: UERJ and UFRJ

For upcoming events, please visit the following website: Unesp Modern Code Events

Please use the following text to acknowledge the use of computing resources from the Center for Scientific Computing at the State University of São Paulo (NCC/UNESP) in your papers:


"Os autores (também) agradecem ao Núcleo de Computação Científica da Universidade Estadual Paulista (NCC/UNESP) pelo uso dos recursos computacionais do cluster heterogêneo de múltiplos núcleos. Esses recursos foram financiados parcialmente pela Intel através dos projetos "Intel Parallel Computing Center", "Modern Code Partner" e "Intel/Unesp Center of Excellence in Machine Learning."


"The authors also thank the Center for Scientific Computing at the State University of São Paulo (NCC / UNESP) for the use of manycore computing resources. Such resources were partially funded by Intel through the "Intel Parallel Computing Center "," Modern Code Partner " and "Intel / Unesp Center of Excellence in Machine Learning. "
