Colfax/GW-IPCC Developer Training at The George Washington University

Intel Xeon Phi Manycore Parallel Programming Workshop

Location:
GWU Science and Engineering Hall (SEH),
Lehman Auditorium, 800 22nd Street NW, Washington, DC.
Across from Foggy Bottom Metro Station.

Date: 11/29/2016
Time: 8:30 am – 6:00 pm

This workshop will focus on manycore programming for Intel Xeon Phi processors. It will be offered by Colfax and GW staff under the sponsorship of Intel. Registration is free, but space is limited, so your registration needs to be confirmed.

Registration: Closed

 

Hands-On Component

Colfax Developer Training is far more than a lecture: it is an experiential learning program. That is because the training contains a hands-on component in two forms:

  1. The instructor will demonstrate the methods taught in the course live, on servers with the latest Intel Xeon and Intel Xeon Phi processors.
  2. Attendees will receive one day of remote access to training servers with Intel Xeon Phi coprocessors, along with a set of programming and optimization exercises.

Bring your own laptop (with Wi-Fi capability) to take advantage of this opportunity.

 

Agenda

Registration, light breakfast (8:30 – 9:00 am)

Morning session (9:00 am – 12:00 noon)

  • Sneak Peek: What will be covered today (30 min)
  • Parallel Programming Model (30 min)
  • Programming and Optimization by Example (2 hours)
    - Demonstration of a case study: direct N-body simulation (see the code sketch after this agenda)
    - Intel processor architectures
    - Break (15 min)
    - Task and data parallelism
    - Memory organization
    - Programming coprocessors and clusters
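
As a preview of the N-body case study mentioned above, here is a minimal C++/OpenMP sketch of a direct N-body force kernel. It is an illustration only, not the workshop's material: the names (ParticleSystem, compute_forces), the problem size, and the softening constant are placeholder choices.

  // Minimal direct N-body force kernel (illustrative sketch, not the course code).
  // A structure-of-arrays layout keeps the inner loop friendly to automatic
  // vectorization; OpenMP spreads the outer loop across threads.
  #include <cmath>
  #include <cstdio>
  #include <vector>

  struct ParticleSystem {            // hypothetical container (structure of arrays)
    std::vector<float> x, y, z;      // positions
    std::vector<float> fx, fy, fz;   // accumulated forces
  };

  void compute_forces(ParticleSystem &p) {
    const int n = static_cast<int>(p.x.size());
    const float softening = 1e-9f;   // avoids division by zero when i == j
  #pragma omp parallel for
    for (int i = 0; i < n; ++i) {    // each thread handles a chunk of particles
      float fx = 0.0f, fy = 0.0f, fz = 0.0f;
      for (int j = 0; j < n; ++j) {  // O(n^2) inner loop: the vectorization target
        const float dx = p.x[j] - p.x[i];
        const float dy = p.y[j] - p.y[i];
        const float dz = p.z[j] - p.z[i];
        const float r2 = dx * dx + dy * dy + dz * dz + softening;
        const float inv_r = 1.0f / std::sqrt(r2);
        fx += dx * inv_r * inv_r * inv_r;
        fy += dy * inv_r * inv_r * inv_r;
        fz += dz * inv_r * inv_r * inv_r;
      }
      p.fx[i] = fx; p.fy[i] = fy; p.fz[i] = fz;
    }
  }

  int main() {
    const int n = 8192;              // arbitrary problem size
    ParticleSystem p;
    p.x.resize(n); p.y.resize(n); p.z.resize(n);
    p.fx.resize(n); p.fy.resize(n); p.fz.resize(n);
    for (int i = 0; i < n; ++i) {    // simple deterministic initial positions
      p.x[i] = 0.001f * i; p.y[i] = 0.002f * i; p.z[i] = 0.003f * i;
    }
    compute_forces(p);
    std::printf("f[0] = (%g, %g, %g)\n", p.fx[0], p.fy[0], p.fz[0]);
    return 0;
  }

With the Intel compiler on a Knights Landing toolchain, a typical build line would look something like icpc -qopenmp -O3 -xMIC-AVX512 nbody.cpp; with GCC, g++ -fopenmp -O3 -march=knl covers the threading and instruction-set parts.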

Lunch (12:15 pm – 1:00 pm)

Afternoon session (1:00 pm – 4:45 pm)

  • Optimization Pointers (1 hour)
    - Scalar tuning and using Intel compilers
    - Automatic vectorization
    - Multi-threading with OpenMP
    - Optimizing cache usage and memory access
    - Communication control
  • Preparing for Intel Xeon Phi processors (30 min)
    - Compiling with AVX-512
    - Using high-bandwidth memory (see the first sketch after this agenda)
    - Leveraging clustering modes
    - Coprocessor form factor and KNL-F
    - Break (15 min)
  • Intel libraries (1 hour)
    - Intel Math Kernel Library (MKL): components, performance tuning (see the second sketch after this agenda)
    - Intel Data Analytics Acceleration Library (DAAL): machine learning
  • Intel Python (1 hour)
    - Brief introduction to Intel Python: where to get it, how to install it
    - How NumPy and SciPy link with Intel MKL
    - How to get the most out of NumPy and SciPy
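
Two short, generic sketches follow for context; neither is course material. The first relates to the "Using high-bandwidth memory" item: on Knights Landing, high-bandwidth memory means the on-package MCDRAM, and one common way to place data there explicitly is the memkind library's hbwmalloc interface. The example assumes memkind is installed (link with -lmemkind).

  // Sketch: allocating a working set in high-bandwidth memory (MCDRAM) on KNL
  // through the memkind library's hbwmalloc interface.
  #include <hbwmalloc.h>
  #include <cstdio>

  int main() {
    const size_t n = 1 << 20;        // arbitrary buffer size

    // hbw_check_available() returns 0 when high-bandwidth memory is present.
    std::printf("high-bandwidth memory available: %s\n",
                hbw_check_available() == 0 ? "yes" : "no");

    // With the default PREFERRED policy the allocation falls back to regular
    // DDR memory if MCDRAM is absent or exhausted.
    double *buf = static_cast<double *>(hbw_malloc(n * sizeof(double)));
    if (buf == nullptr) return 1;

    for (size_t i = 0; i < n; ++i) buf[i] = static_cast<double>(i);
    std::printf("last element: %g\n", buf[n - 1]);

    hbw_free(buf);                   // release with the matching deallocator
    return 0;
  }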
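
The second sketch illustrates the Intel MKL item with the library's most common entry point, a dense double-precision matrix multiply through the CBLAS interface. Sizes and values are arbitrary; the example assumes MKL is installed and linked (for the Intel compiler, the -mkl option is the simplest route).

  // Sketch: C = A * B with Intel MKL's threaded, vectorized DGEMM.
  #include <mkl.h>
  #include <cstdio>
  #include <vector>

  int main() {
    const int n = 512;               // square matrices for brevity
    std::vector<double> A(n * n, 1.0), B(n * n, 2.0), C(n * n, 0.0);

    // C = 1.0 * A * B + 0.0 * C, all matrices stored row-major.
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                n, n, n,
                1.0, A.data(), n, B.data(), n,
                0.0, C.data(), n);

    std::printf("C[0][0] = %g (expected %g)\n", C[0], 2.0 * n);
    return 0;
  }

MKL chooses its own thread count (adjustable through the MKL_NUM_THREADS environment variable), which is the kind of knob the performance-tuning discussion is likely to touch on.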