
Functional Connectivity of Epileptic Brains: Preprocessing EEG Data - Week 2 Update


EEG data can be recorded in many different file formats depending on the instrument and the institution. In this research we will be working with EEG data stored as simple text files.

The first step after importing the EEG data is to apply preprocessing techniques, so that we keep only the information generated by neural activity rather than by artifacts. The most common artifacts present in EEG data are eye blinks and 60 Hz power-line interference, as shown in the figure below.

The most intuitive way to detect 60 Hz power-line interference is to generate a spectrogram of the signal and look for a peak at the 60 Hz frequency component. The simplest way to remove this artifact is to apply a notch filter that attenuates the 60 Hz component of the EEG signal.
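
As a rough sketch of this idea (assuming NumPy; the signal here is synthetic, and in practice a library routine such as a ready-made notch filter would be preferred), a minimal second-order IIR notch can be hand-rolled and verified against the spectrum:

```python
import numpy as np

def notch_filter(x, f0, fs, r=0.98):
    """Minimal second-order IIR notch: zeros on the unit circle at +/-f0,
    poles just inside (radius r) so everything outside a narrow band passes."""
    w0 = 2.0 * np.pi * f0 / fs
    b = [1.0, -2.0 * np.cos(w0), 1.0]        # numerator: zeros at e^{+/-j w0}
    a = [1.0, -2.0 * r * np.cos(w0), r * r]  # denominator: matching poles
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = b[0] * x[n]
        if n >= 1:
            y[n] += b[1] * x[n - 1] - a[1] * y[n - 1]
        if n >= 2:
            y[n] += b[2] * x[n - 2] - a[2] * y[n - 2]
    return y

# Synthetic "EEG": a 10 Hz rhythm plus 60 Hz power-line interference.
fs = 500.0
t = np.arange(0, 2.0, 1.0 / fs)
raw = np.sin(2 * np.pi * 10 * t) + 0.8 * np.sin(2 * np.pi * 60 * t)
clean = notch_filter(raw, f0=60.0, fs=fs)

# The spectral peak at 60 Hz disappears after filtering, while 10 Hz remains.
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
raw_spec = np.abs(np.fft.rfft(raw))
clean_spec = np.abs(np.fft.rfft(clean))
```

The same before/after comparison of the spectrum is what the peak-detection step above relies on.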

Detecting eye blinks is more difficult. Identifying eye-blink patterns in EEG data requires experience and advice from experts. A simple way to detect eye blinks is to match the timestamps of a video recording of the patient against the EEG data. To remove the eye-blink artifact, we apply Independent Component Analysis (ICA) to the data. ICA attempts to un-mix the data according to its sources.

Image source: https://www.hindawi.com/journals/isrn/2011/672353/fig1/

After applying ICA to the imported data, we can generate plots for each of the ICA components, as shown in the figure below. From the plot we select the components we believe are artifacts and zero them out. In this case we remove components IC17 and IC18. We then reconstruct the cleaned EEG data from the remaining ICA components.
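
The zero-out-and-reconstruct step can be sketched end to end on a toy two-channel mixture (assuming NumPy; the sources, mixing matrix, and a bare-bones symmetric FastICA implementation here are all illustrative stand-ins for a real ICA toolkit):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical sources: a neural 10 Hz rhythm and an eye-blink train.
fs, n = 250.0, 2500
t = np.arange(n) / fs
neural = np.sin(2 * np.pi * 10 * t)
blink = np.zeros(n)
for c in (300, 1100, 1900):  # three blink events as Gaussian bumps
    blink += np.exp(-0.5 * ((np.arange(n) - c) / 15.0) ** 2)
S_true = np.vstack([neural, blink])

A = np.array([[1.0, 0.8], [0.6, 1.0]])  # unknown mixing at the scalp
X = A @ S_true                          # observed "EEG channels"

def fastica(X, n_iter=300):
    """Symmetric FastICA with tanh nonlinearity; returns an unmixing matrix."""
    Xc = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(Xc @ Xc.T / Xc.shape[1])
    K = E @ np.diag(1.0 / np.sqrt(d)) @ E.T      # whitening matrix
    Z = K @ Xc
    W = np.linalg.qr(rng.standard_normal((X.shape[0],) * 2))[0]
    for _ in range(n_iter):
        G = np.tanh(W @ Z)
        W_new = (G @ Z.T) / Z.shape[1] - np.diag((1 - G**2).mean(axis=1)) @ W
        U, _, Vt = np.linalg.svd(W_new)          # symmetric decorrelation
        W = U @ Vt
    return W @ K

unmix = fastica(X)
S_est = unmix @ (X - X.mean(axis=1, keepdims=True))

# Pick the component most correlated with the blink pattern, zero it out,
# and reconstruct the cleaned channels from the remaining components.
corr = [abs(np.corrcoef(s, blink)[0, 1]) for s in S_est]
S_est[int(np.argmax(corr))] = 0.0
X_clean = np.linalg.pinv(unmix) @ S_est + X.mean(axis=1, keepdims=True)
```

In real EEG work the artifact components are chosen by inspecting their time courses and topographies, as described above, rather than by correlating against a known blink signal.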

The figures below show the difference between the original EEG data (left) and the preprocessed data (right). The artifacts have been removed, and the data is now ready for our next steps.

Read Previous: Investigating Connectivity of Epileptic Brain - Week 1 Update

Continue to: Extracting Functional Connectivity - Week 3 Update


Functional Connectivity of Epileptic Brains: Extracting Functional Connectivity - Week 3 Update


In the previous week we applied preprocessing techniques to obtain cleaned EEG data. This week we introduce how to extract functional connectivity from the EEG data. Functional connectivity is the study of correlations between events occurring in different regions of the cortex. Its value depends on the level of synchronization between groups of neurons, so the simplest way to calculate the functional connectivity between different EEG electrodes is cross-correlation.

Cross-correlation measures the similarity between two signals as a function of the lag of one relative to the other. Cross-correlation coefficients were computed independently between all pairwise montages of the 19 EEG channels. The coefficient is measured between two vectors x and y of length n at a lag of m, where x and y represent the EEG data of a pair of electrodes.

The computed coefficients were normalized so that the auto-correlation at zero lag takes the highest value (R = 1.0). For each pair of electrodes, the algorithm searches over lags (m) of ±500 ms and selects the maximum cross-correlation coefficient in that range as the strength of the functional connectivity for the pair. Computing this for all electrode pairs yields the connectivity matrix.
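
A minimal sketch of this pipeline (assuming NumPy; the toy channels and parameter names are illustrative) computes the normalized cross-correlation, maximizes it over ±500 ms of lags, and assembles the symmetric connectivity matrix:

```python
import numpy as np

def max_xcorr(x, y, max_lag):
    """Largest absolute normalized cross-correlation over lags -max_lag..+max_lag.
    The normalization makes the zero-lag auto-correlation equal to 1."""
    n = len(x)
    x = (x - x.mean()) / (x.std() * n)
    y = (y - y.mean()) / y.std()
    best = 0.0
    for m in range(-max_lag, max_lag + 1):
        if m >= 0:
            r = float(np.dot(x[m:], y[:n - m]))
        else:
            r = float(np.dot(x[:n + m], y[-m:]))
        best = max(best, abs(r))
    return best

def connectivity_matrix(data, fs, window_s=0.5):
    """data: (n_channels, n_samples). Searches lags of +/- window_s seconds."""
    max_lag = int(window_s * fs)
    k = data.shape[0]
    C = np.eye(k)                      # diagonal: auto-correlation at zero lag
    for i in range(k):
        for j in range(i + 1, k):
            C[i, j] = C[j, i] = max_xcorr(data[i], data[j], max_lag)
    return C

# Toy check: channel 1 is channel 0 delayed by 100 ms; channel 2 is noise.
rng = np.random.default_rng(1)
fs, n = 250, 2000
sig = rng.standard_normal(n + 25)
data = np.vstack([sig[25:], sig[:-25], rng.standard_normal(n)])
C = connectivity_matrix(data, fs)
```

The delayed copy scores near 1 despite the 100 ms offset (the lag search absorbs it), while the unrelated noise channel scores near 0.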

As shown in the figure below, we have now converted the EEG data into a connectivity matrix. The elements of the matrix range between 0 and 1, where 1 indicates a strong connection between two electrodes and 0 indicates no connection. The diagonal elements are all equal to 1, because the correlation between a signal and itself is always maximal.

To make the result easy to visualize, we can generate plots from the connectivity matrix. Undirected graphs are constructed to visualize the connectivity matrices obtained from the proposed cross-correlation method. The electrodes are the nodes of the graph, and the edges carry the connection strength, represented by color: dark blue indicates no connection and dark red indicates the highest connectivity strength.

Read previous: Preprocessing EEG Data - Week 2 Update

Week 4 Update will be posted soon.

Intel® Software Innovator Macy Kuang: Meditation and Virtual Reality


As an Internet of Things pioneer, Android expert, YouTube* channel host, founder of her own game development company, Google* Developer Expert, and Intel® Software Innovator, Macy Kuang is very involved in the technology and innovator communities. Her most recent work involves improving an interactive meditation experience by making virtual reality more user friendly and immersive.

Tell us about your background.

I worked for a number of large organizations in software development and gaming before founding my own company, Maiomaio Games, in 2013. In my spare time, I do a lot of research on robotics. I love technology, which is why I started organizing a lot of tech events and becoming more involved in the global tech community through workshops, presentations, and lectures. I am a part of the Intel Software Innovator program, a Google Developer Expert for Wearable and IoT technologies, organizer of AndroidTO, a long-running technology conference, and the host of my own YouTube channel, Code to Create.

What got you started in technology?

Growing up, my family was always really into technology - especially my uncle. He is a tech entrepreneur, and he lived with my family for a while when he was in college. At one point in the early ‘90s, he made an accounting software program for my mother which helped her be much more productive than others in the same field. He also taught me how to use computers; I particularly remember when he showed me how to use the command line to open Pac-Man, along with some basic software skills.

I didn’t realize working with computers would be part of my life until I studied programming as a part of a Digital Media course I was taking at Seneca College in Toronto. I had studied painting since I was 6 years old and I always thought I would become an artist when I grew up. When I discovered coding, however, I immediately loved it. The ability to create useful tools is really appealing to me.

What projects are you working on now?

Currently, I am helping build a platform for a meditation company, Unyte, to improve their interactive meditation experience. Their mission is to help people train their minds and improve their mental health. I have also stayed busy working on some research projects, such as building experimental wind turbines for alternative energy sources, as well as stress-testing machines for hockey sticks.

Do you see meditation as a space that Virtual Reality will be able to find success in?

Yes, I think there is a lot of alignment for meditation products to emerge and be effective in virtual reality (VR). VR has a way of creating an extremely immersive atmosphere which is great for something like meditation where having a disruptive environment full of distractions can be an obstacle. Also, as the popularity and availability of VR grows, maybe people will be more willing to give meditation a try if it is available to them in a VR experience. 

Tell us about a technology challenge you’ve had to overcome in a project.

Developing for VR is very challenging right now because it is so new to users. We are at a point where the initial novelty excitement about VR as a platform is gone, and we realize that users actually need a lot of guidance and training to understand how VR works. For example, the understanding of space, the comfort of turning their head, the ability to locate objects, and an understanding of the controls are not things that come naturally to many people in VR.

To improve these experiences, and help people have more fulfilling VR experiences, we have added more content to our tutorials, and put forth a lot of time and effort to avoid ambiguous controls. As a result, we have seen a lot of positive results in making the transition into VR easier for our users.

Tell us about your experience as a woman working in technology.

I really love the technology industry. It is a very innovative, energetic and less hierarchical field than others that I have been involved with or become aware of. As an organizer of large events, I have made it a point to strive to ensure our speakers and attendees are representative of a community that we can all be a part of. The teams I work with encourage diversity and seek to showcase talent from every gender, race and culture.

I think the best way for me to help women in the industry is by holding myself to a high standard, continuing to grow and develop myself, and continuing to participate in and support the community. I have been fortunate enough to receive some recognition from the industry, and hopefully that will encourage others. If I can do it, then they can as well.

What trends do you see happening in technology in the near future?

My outlook consists of a future that will be more automated; where we will become more comfortable living with robots. As most developed countries are aging it seems quite likely that we will benefit from robots joining our workforce and helping take care of us. This is one of the reasons why I started doing a lot of research within the realm of hardware and robotics, because I am very interested in helping us maintain our quality of life as we age.

Outside of technology, what type of hobbies do you enjoy?

I love sports. I really want to be able to do space travel even when I am older. Physical training helps me with my energy level and also makes me stronger. I am always finding ways to be more active, particularly with activities like snowboarding, running, hiking, cycling and even golfing.

Do you envision space travel as being like taking a vacation to another planet or to view the galaxy or more of a long-term living in space solution? 

At first I was really into the idea of living on other planets, but a lot of research I have come across has me thinking that humans are born to live on Earth. At least right now, our technology is not advanced enough to allow us to live naturally on any reachable planet that I am aware of. In my lifetime, I would be interested in travelling to space while Earth remains our home.

How does Intel help you succeed?

Intel supports me with my community endeavors by helping me organize events and workshops. I really appreciate companies like Intel who are willing to invest in the people doing incredible things. Additionally, on a personal level, Intel is an international brand, and being recognized by Intel helps me in my career as a whole.

Want to learn more about the Intel® Software Innovator Program?

You can read about our innovator updates, get the full program overview, meet the innovators and learn more about innovator benefits. We also encourage you to check out Developer Mesh to learn more about the various projects that our community of innovators are working on.

Interested in more information? Contact Wendy Boswell

Intel Showcases OpenStack Cloud Momentum at the Sydney Summit


By Melissa Evers-Hood, Intel Open Source Technology Center

Last week, in Sydney, Australia, I attended my first OpenStack Summit. As a relative newcomer to this community, I felt compelled to share my thoughts and experiences from the event.

There is no question this is a dynamic community. OpenStack Foundation Executive Director Jonathan Bryce’s keynote helped set the right tone for the Summit, laying out an expanded charter for the OpenStack Foundation. This new direction incorporates important areas, such as container security and edge computing, to better serve the needs of the ecosystem. An open approach to infrastructure is a key focus for my team in the Intel Open Source Technology Center. I was very encouraged by the opportunities we have to work more closely with the OpenStack community on areas that are critical to our shared success.

Jonathan’s keynote also introduced the OpenLab, led by Huawei, Intel, VEXXHOST, and Open Telekom Cloud.  OpenLab is a place where members of the community can collaborate around integration and testing of open source cloud ecosystem tooling – such as Kubernetes, Terraform, software development kits (SDKs), and more – with OpenStack. I’m very excited that Intel is helping lead this important effort. We strongly believe OpenLab will be an asset to the community as we work together to create more complete open infrastructure solutions, rather than individual building blocks.

The Superuser Award is one of my favorite parts of the Summit, recognizing where community members and operators are doing great work on top of the OpenStack platform. I was thrilled to see the results being shared from the teams at Tencent, China Railway, China UnionPay, and City Network. Intel’s Open Source Technology Center supported many of the teams’ development efforts.  Tencent’s Tstack won the Superuser award which is pretty tremendous.  We were also lucky to be joined by finalist China Railway in a joint technical session sharing more details of their OpenStack implementation on an 800 physical node cluster of Intel® Xeon® processor-based servers.

There was a lot of interest across the community in advancing OpenStack development, as well as in Intel software such as the Data Plane Development Kit, Intel® Data Center Manager, Enhanced Platform Awareness, and more.

In addition, I had the privilege of participating on a Women of OpenStack (WOO) panel about diversity and inclusion. The OpenStack Foundation actively cultivates diverse voices in the open source community, and that has resulted in continually increasing gender representation over the last several years. One highlight for me was the vibrant discussion in cultivating geographic, language and other types of diversity, and the out of the box thinking on language translation. It was gratifying to see how all of the community’s diversity is being embraced globally.  I was also proud to be one of Intel’s representatives at the Women of OpenStack Networking Lunch, where I was warmly welcomed and had several meaningful conversations with some amazing people.


Thank you, OpenStack community, for putting on such a great event. I had a wonderful experience and look forward to continuing the conversation and momentum in 2018.  See you in Vancouver!

Melissa Evers-Hood is Senior Director of Cloud Technologies at Intel’s Open Source Technology Center, focused on the open source cloud software ecosystem.

Trending on IoT: Our Most Popular Developer Stories for November


New IoT Developer Kits

Announcing Arduino Create* Support for Intel®-Based Platforms and the UP Squared* Grove* IoT Development Kit

Use this powerful combination of newly introduced hardware and software to assist you in building high-performance commercial IoT solutions.


Getting Started with Arduino* Create

Get Started with Arduino Create* for Intel-Based Platforms

Follow this guide to start programming your IoT projects using the Arduino Create* Web Editor.


MRAA and UPM Libraries

MRAA and UPM Basics in Arduino Create*

Learn how to use MRAA and UPM libraries in Arduino* Create.


UP Squared* Grove* IoT Development

UP Squared* Grove* IoT Development Kit

This rapid prototyping development platform includes integrated software and end-to-end tools that will reduce development time for your intensive computing applications.


Intel Libraries

Using Intel® Libraries in Arduino Create*

This article shows what libraries are available inside of Arduino Create* for Intel®-based platforms running Linux*.


Getting Started with the Grove IoT Kit

UP Squared* Grove* IoT Development Kit Getting Started Guide

Step-by-step directions to connect your board to Arduino Create* and begin working on your commercial IoT solution.


Intel® Cyclone® 10 LP FPGA kit

Get Started: Intel® Cyclone® 10 LP FPGA Kit

Use this guide to get started with your Intel® Cyclone® 10 LP FPGA kit and set up your development environment.


Innovate FPGA Design Contest

Innovate FPGA Design Contest

You can change the world of embedded compute with your innovative FPGA design. Show your creativity and get a free development kit. Submit your design ideas by December 31st, 2017.


IoT Developer Show

IoT Developer Show: October 2017

In this episode, Intel® Evangelists specializing in IoT demonstrate how to collect real-time traffic data and send it to the GE Predix* cloud.


IoT DevFest

Watch the Intel® Global IoT Devfest Video Series

Industry leaders, innovators, and developers share development tips, tutorials, best practices, and discuss the future of IoT.


Intel® Developer Zone experts, Intel® Software Innovators, and Intel® Black Belt Software Developers contribute hundreds of helpful articles and blog posts every month. From code samples to how-to guides, we gather the most popular software developer stories in one place each month so you don’t miss a thing. Miss last month? Read it here.

Intel IoT

Doctor Fortran in "Eighteen is the new Fifteen"


When the international Fortran standards committee (WG5) met in 2012 to set a schedule for the next standard revision, the informal name of the standard was set to be "Fortran 2015". The schedule changed over the subsequent years, but the name stayed the same. The current schedule has the standard being published in the second half of 2018, which caused a question to be asked: "Should we perhaps think of changing the year number to something more current?"

There is certainly precedent for this in Fortran. What ended up being called Fortran 90 was initially Fortran 83, then Fortran 88, then Fortran 8X. Fortran 2000 later became Fortran 2003 (published in 2004). Fortran 2008's name didn't change, but it was published in 2010. The committee's general rule for naming a revision was "the year in which the technical work was completed", but that hasn't really been the case for a while now, and a growing number of users and implementers raised the concern that Fortran, already laboring under a misperception that it is an ancient and obsolete language, would not be helped by having a name already three years out of date. If one looks to other languages, one sees year numbers that generally reflect the publication date (C++17, for example).

As part of their comments on the second Committee Draft (CD), the UK committee raised the question and requested that J3 (the US committee) discuss it, resulting in paper 17-193r1. J3, at the October 2017 meeting, held a "straw vote" to see what people were feeling, but the results were inconclusive. I then, in my new role as WG5 Convenor, polled the WG5 mailing list and solicited comments along with votes. The result of this poll was definitive, with 17 in favor of changing the name to "Fortran 2018" and 5 opposed. (You can read all of the comments in WG5 document N2144.)

As a result of this, the informal name of the in-development standard will become Fortran 2018. While recognizing that "Fortran 2015" has been in use for a while now, most of us feel that changing the name is a benefit in the long term. Also, as a side-effect of the discussion, the placeholder name for the standard after Fortran 2018 is changing to "Fortran 202x" from "Fortran 2020". It was never expected that the standard be published in 2020 - we figured 2021 at the earliest - but this keeps us from being painted into a corner again. (C++ did this as well, calling their in-development standard C++1x.) Maybe the one after 202x will be 202y?

Cash Recognition for the Visually Impaired Using Deep Learning


Cash Recognition for the Visually Impaired is a project dedicated to blind people living in Nepal. Unlike the banknotes of many countries, Nepalese banknotes have no special markings to help blind people. I have wanted to address this problem for a very long time by coming up with a solution that is realistic and easy to use in everyday life.

Previously, people have tried to solve this problem using techniques such as optical character recognition (OCR), image processing, and so on. While these approaches all seem promising, they do not perform well in real-life scenarios: they require images of the currency notes at very high resolution and under proper lighting conditions, which we know from experience is not always possible.

Imagine the use case: a blind person travelling on a local bus or in a taxi needs to know which note he is handing to the driver. He has to ask, and depend upon the honesty of the driver. Obviously, this approach has many flaws. It makes the blind person dependent on the driver's response, which cannot always be trusted, and he could easily fall victim to fraud. This applies not only to travel, but to supermarkets, retail stores, and anywhere monetary transactions occur.

In order to solve this problem, we require a solution that will work anywhere and does not require perfect picture quality for classification of such monetary notes. This is where deep learning comes in.

The goal of this project is to build a smartphone app, on both Android and iOS, that leverages the power of deep learning image classification behind the scenes and helps blind individuals recognize monetary notes accurately and independently, so they do not have to depend on others and can easily handle monetary transactions on their own. The app will recognize the note being carried and play a sound in Nepali or English, depending on the configuration, announcing the value of the recognized note.

Approach

In order to make a solution that works on realistic images taken with smartphones, the deep neural network needs to be trained on such images. To start, I took pictures of notes from two currency categories only, Rs.10 and Rs.20, so that I would quickly know whether my approach was right. I gathered about 200 pictures from each category.

After collecting the data for these two categories, I needed a suitable model to train on.

Transfer Learning

One of the most popular and useful techniques in deep learning today is transfer learning. Training a deep neural network from scratch would normally require a large dataset of images for the task I am implementing, but with transfer learning only a small dataset is needed. We take a model that is already trained on a huge dataset and leverage its learned weights, re-training it on the small dataset that we have. That way we do not need a large dataset, yet the model still predicts accurately.

Since at this point I am only verifying my approach, I used a VGG16 model pre-trained on the ImageNet dataset with its 1000 categories. I used a fine-tuning technique on VGG16 so that I only needed to re-train the last layer of the model to get the desired accuracy.
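
The core fine-tuning idea - freeze the pretrained layers and retrain only the new final classifier - can be sketched generically (assuming NumPy; here a fixed random projection stands in for VGG16's convolutional base, and the toy data stands in for the note photos):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained convolutional base (VGG16 in this post): a FROZEN
# feature extractor whose weights are never updated during fine-tuning.
def frozen_features(x, W_base):
    return np.maximum(0.0, x @ W_base)  # fixed projection + ReLU

d_in, d_feat = 32, 64
W_base = rng.standard_normal((d_in, d_feat)) / np.sqrt(d_in)  # frozen weights

# Toy two-class data standing in for the Rs.10 / Rs.20 pictures.
n = 400
labels = rng.integers(0, 2, n)
centers = np.where(labels[:, None] == 1, 2.0, -2.0) * np.ones((n, d_in)) / np.sqrt(d_in)
x = centers + rng.standard_normal((n, d_in))
feats = frozen_features(x, W_base)

# Retrain ONLY the last layer: logistic regression by gradient descent.
w, b = np.zeros(d_feat), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))   # sigmoid output
    w -= 0.1 * (feats.T @ (p - labels) / n)      # gradient of log-loss
    b -= 0.1 * float(np.mean(p - labels))

p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
accuracy = float(np.mean((p > 0.5) == labels))
```

In the actual project this corresponds to loading VGG16 with ImageNet weights, marking the convolutional layers non-trainable, and training a fresh final classification layer on the note images.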

I trained my fine-tuned model with the following configuration for the first training:

I used Keras with a TensorFlow* backend for the code and ran the training.

After training, the validation accuracy is about 97.5% and the training accuracy is 98.6%.

From this result we can see that the model is slightly over-fitted. To minimize that, I will need more data and may also have to introduce regularization into the model; these will be my next steps.

The App

For now, I have developed an MVP version of the app using React Native that interacts with the prototype via a RESTful API. Some screenshots of the app are shown here:

For now, the two-category version of the app interacts via the REST API, and the recognition and prediction are computed on the server. After prediction, the response is sent to the client app in JSON format, and displaying the label and playing the sound are handled entirely on the client side.
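
The server/client contract might look like the following sketch (assuming the standard-library `json` module; the field names, labels, and sound filenames are hypothetical, not the project's actual API):

```python
import json

# Server side: package the prediction result for the client.
def make_response(label, confidence, language="ne"):
    # Hypothetical mapping from (note, language) to the audio clip to play.
    sounds = {("Rs.10", "ne"): "rs10_nepali.mp3", ("Rs.10", "en"): "rs10_english.mp3",
              ("Rs.20", "ne"): "rs20_nepali.mp3", ("Rs.20", "en"): "rs20_english.mp3"}
    return json.dumps({"label": label,
                       "confidence": round(confidence, 3),
                       "sound": sounds[(label, language)]})

# Client side: parse the JSON and decide what to display and play.
payload = json.loads(make_response("Rs.20", 0.975, language="en"))
```

With a contract like this, the heavy model stays on the server while the client only needs to render `label` and play `sound`.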

Next Steps

Now that I have successfully tested my approach and shown it is feasible for the task I am trying to solve, the next step is to collect data for all seven categories of standard currency notes in Nepal.

For this project to be successful and effective I require at least 500 images for each category.

Further, after training the model I will have to create an app with the model embedded offline, so that a blind person using the app won't have to connect to the internet every time they need to recognize the notes they are carrying.

Both of these tasks will require dedicated time and resources to implement properly. But after successfully testing my approach on the first two categories, I am hopeful that I can solve this problem using deep learning.

Winners Announced in the Modern Code Developer Challenge


CERN openlab and Intel are pleased to announce the winners of the Intel® Modern Code Developer Challenge! The announcement was made today at ‘SC17’, the International Conference for High Performance Computing, Networking, Storage, and Analysis, in Denver, Colorado, USA.


Two winners were selected: Elena Orlova, for her work on improving particle collision simulation algorithms, and Konstantinos Kanellis, for his work on cloud-based biological simulations.

A Challenge for CERN Openlab Summer Students

CERN openlab is a unique public-private partnership between CERN and leading companies, helping accelerate development of the cutting-edge ICT solutions that make the laboratory’s ground-breaking physics research possible. Intel has been a partner in CERN openlab since 2001, when the collaboration was first established.

Each year, CERN openlab runs a highly competitive summer-student programme that sees 30-40 students from around the world come to CERN for nine weeks to do hands-on ICT projects involving the latest industry technologies.

This year, five students were selected to take part in the Intel® Modern Code Developer Challenge. This competition showcases the students’ blogs about their projects — all of which make use of Intel technologies or are connected to broader collaborative initiatives between Intel and CERN openlab. You can find additional information about these projects on a dedicated website that also features audio and video interviews.

“We are thrilled to support these students through the Modern Code Developer Challenge,” says Michelle Chuaprasert, Director, Developer Relations Division at Intel. “Intel's partnership with CERN openlab is part of our continued commitment to education and building the next generation of scientific coders that are using high-performance computing, artificial intelligence, and Internet-of-things (IoT) technologies to have a positive impact on people’s lives across the world.”

Selecting a Winner from Five Challenging Projects

The competition featured students working on exciting challenges within both high-energy physics and other research domains.


At the start of the challenge, the plan was for a panel of judges to select just one of the five students as the winner and to invite said winner to present their work at the SC17 conference. However, owing to the high quality of the students’ work, the judges decided to select two winners, both of whom received full funding from Intel to travel to the USA and present their work. 

Smash-simulation Software

Elena Orlova, a third-year student in applied mathematics from the Higher School of Economics in Moscow, Russia, was selected as one of the two winners. Her work focused on teaching algorithms to be faster at simulating particle-collision events.

Physicists widely use a software toolkit called GEANT4 to simulate what will happen when a particular kind of particle hits a particular kind of material in a particle detector. This toolkit is so popular that researchers use it in other fields to predict how particles will interact with other matter, such as in assessing radiation hazards in space, in commercial air travel, in medical imaging, and in optimizing scanning systems for cargo security.

An international team, led by researchers at CERN, is developing a new version of this simulation toolkit known as GeantV. This work is supported by a CERN openlab project with Intel on code modernization. GeantV will improve physics accuracy and boost performance on modern computing architectures.

The team behind GeantV is implementing a deep learning tool intended to make simulations faster. Orlova worked to write a flexible mini-application to help train the deep neural network on distributed computing systems.

“I’m really glad to have had this opportunity to work on a breakthrough project like this with such cool people,” says Orlova. “I’ve improved my skills, gained lots of new experience, and have explored new places and foods. I hope my work will be useful for further research.”

Cells In the Cloud


Konstantinos Kanellis, a final-year undergraduate in the Department of Electrical and Computer Engineering at the University of Thessaly, Greece, is the other Modern Code Developer Challenge winner, thanks to his work on BioDynaMo. BioDynaMo is one of CERN openlab’s knowledge-sharing projects (another part of CERN openlab’s collaboration with Intel on code modernization). The project’s goal is to develop methods for ensuring that scientific software makes full use of the computing potential offered by today’s cutting-edge hardware technologies. This joint effort by CERN, Newcastle University, Innopolis University, and Kazan Federal University aims to design and build a scalable and flexible platform for rapid simulation of biological tissue development.

The project focuses initially on the area of brain tissue simulation, drawing inspiration from existing, but low-performance, software frameworks. By using the code to simulate development of both normal and diseased brains, neuroscientists hope to learn more about the causes of — and identify potential treatments for — disorders such as epilepsy and schizophrenia.

In late 2015 and early 2016, algorithms already written in Java* were ported to C++. Once porting was completed, work began to optimise the code for modern computer processors and co-processors. However, to address ambitious research questions, more computational power was needed. Future work will attempt to adapt the code for high-performance computing resources over the cloud.

Kanellis’s work focused on adding network support for the single-node simulator and prototyping the computation management across many nodes. “Being a summer student at CERN was a rich and fulfilling experience. It was exciting to work on an interesting and ambitious project like BioDynaMo,” says Kanellis. “I feel honoured to have been chosen as a winner, and that I've managed to deliver something meaningful that can make an impact in the future.”

ICT Stars of the Future

ICT stars of the future

Alberto Di Meglio, head of CERN openlab, will present more details about these projects, as well as details about the entire competition, in a talk at SC17. The other three projects featured in the challenge focused on using machine learning techniques to better identify the particles produced by collision events, integrating IoT devices into the control systems for the LHC, and helping computers get better at recognising objects in satellite maps created by UNITAR, a UN agency hosted at CERN.

“Training the next generation of developers — the people who can produce the scientific code that makes world-leading research possible — is of paramount importance across all scientific fields,” says Di Meglio. “We’re pleased to partner with Intel on this important cause.”

For more information, please visit the Intel® Modern Code Developer Challenge website. Also, if you’re a student and are interested in joining next year’s CERN openlab Summer Student Programme, please visit the dedicated page on our website (applications will open in December).


Intel® Black Belt Software Developers, Intel® Software Innovators, & Intel® Student Ambassadors: November 2017


Intel® Developers and Innovators were busy over the last month! Here’s an update on what the Intel® Software Innovators, Intel® Black Belt Software Developers, and Intel® Student Ambassadors were up to around the globe.

BLACK BELT SPOTLIGHT

Marco Dal Pino worked the Ask the Expert booth, showcasing demos and doing Q&A with attendees at the Windows Developer Conference; supported teams and individuals in realizing their projects for Hack Developers Italia; hosted a session titled “Visual Studio for the Internet of Things” at Visual Studio Saturday 2017; and organized the dotNETconf Italy conference in Toscana, where he hosted a session on .NET Core & IoT.

Suresh Kumar Gunasekaran & George Christopher spoke at the 50th Engineers Day in Trichy and also spoke about entrepreneurship and initiative at A Road to Million Dollar Startup.

Martin Foertsch & Thomas Endres showcased and gave talks on their Avatar telepresence system (using the Nao Robot, Oculus Rift, RealSense, and Intel IoT Gateway) as well as Genuino technology at IMWorld, CodeMotion Berlin, and the Oxford Science, Engineering and Technology Fair.

Andre Carlucci spoke about computer vision and the Intel RealSense camera at InterCon Sao Paulo. Gaston Hillar explained the advantages of Intel Distribution for Python for GIS Professionals at URISA GIS-Pro 2017. Abhishek Nandy gave an overview of Intel AI at a meetup in Kolkata and held Intel Commercial IoT Workshops at 4 events in India.

INNOVATOR SPOTLIGHT

ASIA PACIFIC

Adam Ardisasmita spoke at a Dine & Discuss at Game Dev Bandung discussing extended reality technology (VR, AR, MR) and did an AR/VR Workshop training people on how to use Unity, use the XR library, and ARCore at the Game Dev Bogor Meetup. Avirup Basu demoed and spoke at the day long Commercial IoT Workshop in Siliguri teaching students the use of AWS IoT using the Intel NUC and some sensors. Karthik MU spoke about Chatbots in IoT Industry Today at RVCE.

Benjamin Mathews Abraham gave a walkthrough on artificial intelligence concepts, starting from basic algorithms to advanced use cases, with technical focus on TensorFlow, SciKit, Intel Distribution for Python and Intel Movidius chips at IEEE All Kerala Computer Society Student Congress 2017. Manisha Biswas gave an overview of Intel IoT technology as well as how it integrates with Amazon Web Services at the Intel Alliance IoT Workshop for Women Techmakers Kolkata.

Mythili Vutukuru gave a talk about her NFV research at the Intel India Research Colloquium. Omkar Khair spoke at the Intel AI Workshop at BITS Goa Campus. Prajyot Mainkar gave a talk and live demo on Intel IoT at the monthly meetup at RIT and also did an AI meetup session at the September meetup. Sourav Lahoti gave a webinar for the Intel IoT AWS alliance workshop. Pooja Baraskar hosted a small IoT meetup organized for developers in Chennai to discuss Commercial IoT. Pablo Farias Navarro’s two-hour Intro to VR Game Development course, which covers the development of a simple VR game with Unity, had 102 new enrollments, and his Virtual Reality Mini-Degree, with 40 hours of content, had 135 new enrollments.

UNITED STATES

Peter Ma’s latest project, Doctor Hazel, was covered as part of a Wall Street Journal article on AI. Peter also participated in the Money2020 Hackathon and the Singapore Airlines App Challenge. He also participated in the ATEC Developer Creativity Challenge where he built AR Pay, using AI to scan items and AR to display them for users to purchase through Alipay winning Best Creative Idea Award.

At Smart City Hack 2017 in Tempe, Arizona, Chris Matthieu took 3rd place for his proposal to connect all of the city’s smart meters to computes.io to create a supercomputer that could be used to cure diseases like cancer. Mohamed Elgendy taught a Machine Learning Nanodegree workshop at the Udacity hackathon. Jon Collins gave a State of the Industry talk on the current contenders in the AR marketplace, which also included promotion of and reference to the Intel RealSense platform for Windows-based AR technologies, at the Tacoma Game Dev Co-Op.

Alex Porter wrote blog posts about Underminer Studios’ involvement at both the Austin Unity Group Meetup and the AR/VR Tools & Tech Meetup in Austin. Alex also pitched for IDEGO at the Tarmac TX event for Women in Tech for Good, talking about their powerful VR tool for mental health (Alex begins speaking at 27:20 in the video). Both Alex & Tim Porter were on a panel for Startup Week exploring the mechanisms to engage and immerse users in virtual reality therapy with design, play, empathy, and biofeedback.

Geeta Chauhan gave a talk on Distributed Deep Learning Optimizations discussing optimizations for model training and inferencing, and many of the Intel AI related technologies at AI with the Best Conference. She also gave a talk on Distributed Deep Learning Optimization for Finance at the Global AI Conference in New York. Macy Kuang shared a video about the Android Things that Blink App.

Harsh Verma demonstrated PeopleSense Technology at the 27th Annual Fall Transportation Conference in Las Vegas. At the ACM Sacramento Chapter Talk on Data Science, Harsh and his colleagues continued their discussion on the implications for managers. Moheeb Zara launched a VUZE 360 4K VR camera on a nearspace balloon. They lost it, but documented the project and are still hoping someone finds it. He is also developing a robotics kit from scratch to teach robotics, working to reduce cost at every point: it uses the Intel Movidius NCS and 3D-printable wheels, and the laser-cut body clicks together with no additional hardware needed.

Nathan Greiner spoke as part of a panel on emerging technology in corporate real estate, offering solutions and discussion on integrating virtual reality and IoT into the real estate workflow and software systems, at the Industrial Asset Management Council Fall Forum. He also had a booth demoing the VR green screen, showing the transformative effect of an enterprise through the use of big data and advanced visualization. Ron Evans shared the GoCV project, which enables computer vision using Go, OpenCV 3, and the Intel CV SDK. Ron’s articles topped the Golang Weekly email newsletter, the most influential email newsletter in the Go community, for the weeks of October 12 and October 26.

Rose Day held an IoT Workshop at Harvard and Harvard wrote an article about it, and the CT Tech Council website also reposted an Intel IDZ article featuring Rose and her work. Rose also discussed her current research, results, and future research into Environmental Analysis for Migraines using cloud computing technologies and iOS application in her West Hartford Toastmaster’s presentation. Shivaram Mysore spoke on NFV at the Google Plugfest and Faucet Conference in Berkeley.

Daniel Whitenack published a blog post walking through the implementation of a neural network in the Go programming language. He is also teaching an 8-week course on “Production Scale Big Data Implementation” and gave a talk on deploying AI to the edge, titled “Honey I Shrunk the Neural Net”, at the All Things Open Conference, where he demoed object detection on a PocketCHIP using the Movidius NCS. Daniel also presented on “reproducible data science in the cloud” at an RPI Lally Invited Talk, presented to researchers at the GE Global Research Center and the GE Go Users Group about running ML/AI workflows on modern infrastructure, and gave a talk and workshop on applied ML/AI at the GDG DevFest in the Capital Region.

EUROPE

Fabrizio Lapiello began Internet of Things | Caserta, the first completely free course in Italy offering training on micro-controllers, electronics, embedded programming, APIs, and other topics in the IoT field. Alejandro Alcalde shared his Scala-Category-Theory project on GitHub, wrote a post on Scala Category Theory Composition, began writing a series on cryptography starting with Cryptography 101: Mathematical Basis, and also wrote about how he implemented his own related-posts feature for his blog using sklearn, KMeans, and TF-IDF. Eyal Gruss gave talks at the Haifa Film Festival VR Conference, Reversim 2017, Nvidia GTC Israel, DataHack 2017, and the KAS-YPFP seminar.

Gokula Krishnan Santhanam gave a talk on the basics of machine learning and on what can and cannot be done with the AI technology we have right now at the PyData Warsaw Meetup. Michael Schloh gave a hands-on workshop on using embedded devices and serial analysis hardware at CCC Datenspuren 2017, and an IoT workshop teaching NodeJS and Python development on Wind River Pulsar Linux on MinnowBoard Turbots. Johnny Chan made TensorFlow GitHub contributions by adding code samples to the official TensorFlow documentation and patching the TensorFlow Serving bazel build process by updating the underlying source code.

Justin Shenk spoke at PyData Warsaw on how to visualize neural network activity and parameters, or in other words, “breaking the black box of deep learning”. Kosta Popov was interviewed for a blog post about him and his Cappasity: 3D Scanning Technology Platform. Pascal van Kooten presented “Automated Machine Learning” with the self-created library ‘xtoy’ at PyData Trojmiasto. Liang Wang promoted his open-source project in numerical computing, Owl library in OCaml, at the 26th ACM Symposium on Operating System Principles. Mohammed Shojafar shared his Superfluidity NFV project on his site. Silviu-Tudor Serban was on a panel tech talk and gave a presentation on his projects at Makers United Bucharest.

Salvino Fidacaro organized the Google DevFest Mediterranean conference in Italy and gave the introductory keynote talking about the future of technologies and in particular AR. Matteo Valoriani showcased AR/VR applications for healthcare demoing his “Virtual Patient” project at SIDO International Congress. He also demoed and showcased applications for healthcare, in particular, mixed reality solutions and face reconstruction at Smau Milano. Matteo also spoke about the use of AR technologies to improve interaction between patient and doctors and device labs at Aria CAD/CAM meeting.

SOUTH AMERICA

Pedro Kayatt had a booth showing both 7VRWonders and Minecraft VR to middle school students, demonstrating how VR can improve education, at a Mostra Porto Workshop. He also gave a talk explaining the differences between AR, MR, and VR and how to start developing for VR at FATEC SCS Games Day. Pedro also gave an introduction to VR and MR, and how AI can blend with them to improve content, at InterCon 2017. At iMasters InterCon, Pedro talked about how to implement the basics for VR in Unreal Engine 4. Also, at TEDxUSP, Pedro talked about the inspiring road that took him to found his own company and accomplish the dream of his Dinos do Brasil project. Marcelo Quinta gave a talk on software design for interaction through voice at Devfest Cerrado.

STUDENT AMBASSADOR SPOTLIGHT

Yash Akhauri and Vidhi Jain gave talks at BITS Pilani on how Intel Student Ambassadors on campus are integrating Intel technologies into their deep learning projects. Yash also published the week 4 and week 7 updates for his Art’Em project. Vidhi Jain shared two projects: Project Momental, which monitors mental well-being and symptoms of bad mood, anxiety, or depression, and Voice Profiling, which aims to evaluate whether parameters like gender, age, background, and more can be predicted for each individual speaker.

Rafael Santos gave an overview of deep learning, applications, and the present and future of software and hardware for AI at Semana de Tecnologica FATEC Rubens Lara. Avinash Madasu posted two projects on GitHub, a movie recommender system and a machine learning specialization course. Bruna Pearson shared her background and projects in an interview AI Student Ambassador Bruna Pearson: Deep Learning, Robots, and Drones Oh My and also shared her latest project, Autonomous UAV Control and Mapping in Cluttered Outdoor Environments, featuring the Intel Movidius NCS.

Chinmay Yalameli created the Intel Software Student’s Group GIT which will meet weekly at his institution. Chris Barsolai wrote an article listing the comprehensive, broad portfolio Intel offers to developers in the AI space in its bid to democratize AI. Daniel Theisges shared his RH Analytics project on GitHub and wrote an article on the Analysis of the Probability of Employee Attrition using Logistic Regression in TensorFlow. David Ojika presented the use of FPGAs for student’s class projects in deep learning at the University of Florida.

Eren Halici gave a hands-on tutorial about deep learning at Middle East Technical University. Edwin Williams used Intel computing resources to simulate the actions of vessels in a congested bay area for his Situation Awareness and Collision Avoidance project. Christian Gabor ran a meeting for OSU students to get involved in machine learning, demonstrating the resources of the Intel Nervana AI Academy. Kerem Kurban tested Intel’s Python 3 distribution in his GitHub project Naïve Bayesian Estimation for Car Dataset.

Kshitiz Rimal gave a talk on how Intel is helping students and developers in the field of AI and deep learning by providing its tools and software at the Artificial Intelligence & Its Ecosystem Conference. Nikhil Murthy presented at the Student Ambassador Forum at the Deep Learning Summit in Montreal, where he discussed his research and projects and talked about his experiences in the AI space. Muhammed Zahit Karasam gave a talk about Intel and artificial intelligence at the MUFE Robotics club at Marmara University.

Kaustav Tamuly posted his Tic Tac Toe game on GitHub, which teaches agents to play the classic game without using any fancy libraries. He also spoke at the Intel AI Workshop on the BITS Goa Campus. Rouzbeh Shirvani shared his real-time object classifier for live video streams using the Movidius NCS. Sachin Dadaso Waghmode gave a talk at the Persistent Memory Computing Summit in San Francisco on high-capacity persistent memory with Java. Salil Gautam shared his web app project for image classification using a convolutional neural network.

Prajjwal Bhargava shared his Natural Language Processing with Small Feed-Forward Networks project, as well as his Natural Language Processing Sentiment Analysis project, on GitHub. Prajjwal also created a YouTube channel to make AI more accessible, which has already gained over 700 subscribers and 4,000 views in its first 20 days. Shubham Jaiswal shared his project D-fast, an assistive chat bot for courier services. Sri Harsha Gajavalli spoke at the AI Workshop at IIITS College, covering an introduction to AI, ML, and DL, Intel architecture for AI, Intel DAAL, Intel MKL, and MKL-DNN, and provided hands-on experience with Intel Parallel Studio.

Shaury Baranwal shared his YelpCamp project, an e-commerce platform for selling and buying things built on the MEN stack, whose user experience he wants to improve with WebVR. Soubhik Das had a research paper on supervised machine learning in intelligent character recognition of handwritten and printed nameplates accepted at the IEEE International Conference (ICAC3’17). Tejeswar Tadi published an article on Recursive Neural Tensor Networks and their utility in gauging trader sentiment for cryptocurrencies.

Devinder Kumar gave a talk on “Explaining the Unexplained: Peering into the Minds of AI” at the Toronto Machine Learning Summit. Vaibhav Amit Patel’s paper on a Generative Adversarial Network for tone mapping HDR images was accepted at the NCVPRIPG 2017 conference in Mandi, India. N. Aravindhan spoke about artificial intelligence with Arduino and how the Internet of Things (IoT) connects people, machines, and applications to enable a bi-directional flow of information and support real-time decisions. Peter Szinger published a blog post introducing clustering and the K-means algorithm. Rashik Kotwal posted a video demoing a GUI developed in Python for the ongoing research project at AMIIL.

Want to learn more?

You can read about our innovator updates, get the full Innovator program overview, meet the innovators and learn more about innovator benefits. We also encourage you to learn more about our Black Belt Software Developer program as well as our Student Ambassador program. Also check out Developer Mesh to learn more about the various projects that our community of innovators are working on.

Interested in more information? Contact Wendy Boswell on Twitter.

Intel® Developer Mesh: Editor’s Picks November 2017


Every month I pick out 5 projects from Intel® Developer Mesh that I find interesting and share them with you. There is a diverse array of projects on the site, so narrowing it down to just five can be difficult! I hope you’ll take a few minutes to find out why each of these projects caught my eye and then hop over to mesh to see what other projects interest you.

CiapoFoto Booth: A Great Addition to Your Next Party

CiapoFoto is a transportable photo booth to set up at parties and events, but what sets it apart from regular photo booths is that, with the help of an Intel RealSense™ camera, it places you and your friends into your preferred background, catapulting you into fun scenarios. Imagine pictures of your wedding party being chased by the Stay Puft Marshmallow Man or dinosaurs! Intel® Software Innovator Michele Tameni plans to build an app as well as a standalone booth for this project and says that pictures can easily be shared or even projected on a wall during the party; using facial recognition, the booth can collate your photos and send them right to your inbox – making this quite an interactive party game!

RoboJackets’ Intelligent Ground Vehicle Competition Bot

Intel® Software Innovator Daniil Budanov’s team, RoboJackets (https://devmesh.intel.com/projects/robojackets-intelligent-ground-vehicle-competition-neural-network), competes annually in the Intelligent Ground Vehicle Competition, which revolves around autonomously navigating an outdoor obstacle course. Due to the noisy and inconsistent nature of the competition environment, they are proposing to move their lane detection algorithms to a convolutional neural network using the Intel® Movidius™ Neural Compute Stick (NCS) to infer boundaries based on segmentation and classification of drivable space. A robot obstacle course sounds incredibly fun, and I think the RoboJackets are headed in the right direction to get an edge over the competition by using the low-powered NCS.

The Electronic Curator: Creating Vegetable Face Artwork and Curating It

The Electronic Curator is a generative adversarial network that creates vegetable face artwork and curates it. Intel® Software Innovator Eyal Gruss’ project examines whether a computer can not only generate art, but also evaluate its quality. The painter and curator are both neural networks, fed with examples of muses and portraits, and then generalized using Generative Adversarial Networks. The system not only repetitively attempts to improve the painting, but also tries to understand it and generate a text description for it, and eventually grades it. I really enjoyed watching the video of the live demo of the Electronic Curator that Eyal has linked in the project; watching the painter adjust and improve based on the expressions of the live muse was really cool to see.

 

Local Air Quality Forecast: Deep Casts Powered by NCS

Asthma affects a large number of people worldwide and can have quite adverse effects on their quality of life. With rising air pollution, volcanic ash, pollen levels, wildfire smoke, and more contributing to poor air quality, there isn’t really a good forecasting tool available. While current “now casts” can tell you to avoid going outside right now because of the air quality, they are often inaccurate because they are based on a combination of historical data and the nearest station data, which could still be quite a ways away. Intel® Student Ambassador Carlos Paradis’ project intends to use the “now cast” data along with local air quality sensors and a Raspberry* Pi to not only get a better idea of current air quality for a particular micro-climate, but also to forecast the air quality more accurately using an Intel® Movidius™ NCS.

Monitoring Tea Plantation Sites with Autonomous Drones

India is one of the largest tea producers in the world. Intel® Black Belt Software Developer Abhishek Nandy wants to use drones to search out places for future tea plantations as well as to autonomously monitor tea plant health on current plantations. The drone will fly over the plantation in a specific grid and take scans with an infrared camera attached to a Raspberry* Pi 3. An Intel® Movidius™ NCS will also be attached to the Raspberry* Pi 3 and will process the images against the trained model to determine the health of the tea plantation site. This smart setup using artificial intelligence will make monitoring current tea plantations, as well as finding new sites, much easier.

Become a Member

Interested in getting your project featured? Join us at Intel® Developer Mesh today and become a member of our amazing community of developers.

If you want to know more about Intel® Developer Mesh or the Intel® Software Innovator Program, contact Wendy Boswell.

Using Intel® Threading Building Blocks in Universal Windows Platform applications


The Intel® Threading Building Blocks (Intel® TBB) library provides a set of algorithms to enable parallelism in C++ applications. It is highly portable and supports multiple platforms, including the full spectrum of Windows* devices based on Intel® architecture.

The Intel TBB 2018 release added support for the Universal Windows Platform (UWP) – an application platform for the Windows 10 ecosystem that allows developers to create and run apps on all kinds of Windows devices - PC, Tablet, Phone, Xbox*, HoloLens*, Surface Hub*, and even IoT.

Universal Windows Platform ecosystem

The wide reach of UWP has its consequences: there are restrictions on APIs that could violate the security of the platform. Microsoft’s Visual C++* compiler has the /ZW option that can be used to detect such API calls; however, to make sure your app is fully UWP-compliant, you have to check the whole package with the Windows App Certification Kit.

Getting Started with Developing UWP Applications Using Intel® Threading Building Blocks

The tutorials Windows* 8 Tutorial: Writing a Multithreaded Application for the Windows Store* using Intel® Threading Building Blocks and Windows* 8 OS Tutorial: Writing a Multithreaded Application for the Windows Store* using Intel® Threading Building Blocks - now with DLLs are fully applicable to UWP application development with just a couple of small changes:

  • You have to use the Blank App (Universal Windows) Microsoft Visual Studio* 2015/2017 template instead of Blank App (Windows Store).
  • Prebuilt UWP-compliant binaries of the Intel TBB library are available as part of our commercial and open-source distributions. You can find them in the following locations:
    • In the open-source distribution: <distribution_root>\lib\<target_architecture>\vc14_uwp
    • In the commercial distribution: <suite_install_dir>\compilers_and_libraries_<version>\windows\tbb\lib\<target_architecture>\vc14_uwp, where <suite_install_dir> is the installation directory of the software suite Intel TBB came with (by default, C:\Program Files (x86)\IntelSWTools\)

Note that the Universal Windows Driver platform is also supported. You can find the binaries for it in the folder vc14_uwd available in the commercial distribution of Intel TBB. Also, binaries for the Windows Runtime (WinRT) are still available in the folder vc12_ui.

If you prefer to build the binaries from source, see the section Build Intel® Threading Building Blocks from Source.

The steps to package Intel TBB inside a UWP app are largely the same as for Windows 8 applications:

  1. In your Microsoft Visual Studio Project Properties > Linker options, set Additional Dependencies to tbb.dll and Additional Library Directories to the Intel TBB folder with the binaries.
  2. Right-click your project root and select Add > Existing item.... Select the tbb.dll file to add it to your UWP project dependencies.
  3. Right-click tbb.dll in your project tree and select Properties. Set Content to True to mark the file as content. This will include the Intel TBB library inside your UWP app package.
  4. Right-click your project root and select Store > Create App Packages... to create a Windows Store Application package.
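Steps 2 and 3 above amount to marking the DLL as deployable content in the project file. In a C++ UWP project, the resulting .vcxproj entry looks roughly like the following sketch (the bare `tbb.dll` path is an assumption; it depends on where you copied the binary):

```xml
<ItemGroup>
  <!-- Ship tbb.dll inside the app package by marking it as deployment content -->
  <None Include="tbb.dll">
    <DeploymentContent>true</DeploymentContent>
  </None>
</ItemGroup>
```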

Finally, the last step is to launch the Windows App Certification Kit and check that the application successfully passes validation. The main thing to watch out for is the Supported API test, which checks that your application calls no restricted Windows Store* APIs:

Windows App Certification Kit report

Congratulations, you’re done! Your Universal Windows Platform application can use Intel TBB and benefit from parallelism on the wide range of Windows devices based on Intel architecture.

Build Intel® Threading Building Blocks from Source

Universal Windows Platform Binary

To build UWP-compliant Intel TBB from source, you have to install:

  1. GNU* Make
  2. Microsoft Visual Studio 2015/2017
  3. Windows 10 SDK

To prepare the environment:

  1. Launch the Microsoft Visual Studio developer command prompt
  2. From within the developer command prompt, locate the script vcvarsall.bat in your Microsoft Visual Studio installation and launch it with the options vcvarsall.bat <arch> store, where <arch> is the target architecture. This step configures the environment for linking to UWP libraries.

From the same developer command prompt, run the following:

gmake tbb tbbmalloc target_app=uwp target_mode=store

Let’s take a look at the options:

  • target_app=uwp configures Intel TBB to link to OneCore.lib and to use only the set of APIs that is allowed for a Universal Windows Platform application.
  • target_mode=store makes sure that tbb.dll is configured to be run in an app container (a requirement for Windows Store UWP packages).

Universal Windows Driver Binary

To build UWD-compatible binaries, you have to install the same set of software as for UWP-compliant Intel TBB plus the Windows Driver Kit.

Launch the Microsoft Visual Studio developer command prompt. You don't have to launch vcvarsall.bat with the store argument for UWD binaries. From the command prompt, run the following:

gmake tbb tbbmalloc target_app=uwd

  • The target_mode option is not required because a Universal Windows Driver does not need to be run in the restricted UWP application environment.
  • target_app=uwd configures Intel TBB to link to OneCore.lib dynamically and to the C Run-Time Libraries (CRT) statically. Linking to the CRT is necessary for Universal Windows Drivers.

Note that you have to verify that your driver uses only Universal Device Driver Interfaces (DDIs) with the ApiValidator application that comes with the Windows Driver Kit.

Intel® AI Academy: The Future of AI. For All.


Intel AI Academy

The inverted A symbol (∀) is the universal quantifier from predicate logic. It means that a stated assertion holds "for all” instances of the given variable.

The AI developer program team at Intel believes “∀” is the perfect symbol to represent the new Intel® AI Academy. The AI Academy is intended for all developers, data scientists, students and educators who are creating the future of AI. For All.

The Intel® AI Academy is a membership program designed to give developers, data scientists, students and educators the tools they need to shape the future of AI. Members can stay on top of the latest developments in the AI space with learning materials and tools, run their own solutions using Intel cloud technology through the Intel® AI DevCloud, and get feedback and support from peers and experts on their AI projects, and much more.

Intel® AI Academy: Learn, Develop, Share & Teach

The AI Academy offers members a multitude of benefits that fall into the primary categories of learning, developing, sharing and teaching.

AI Academy Member Benefits

Learn – Build your knowledge, sharpen your skills, and get recognized through expert-led training and curated learning paths designed for beginners & experts alike.

Develop – Leverage Intel-optimized frameworks, tools, and libraries to help you easily deploy AI solutions to solve complex problems.

Share – Keep up to date with the latest AI news, collaborate with experts and peers, and receive support & feedback for your AI projects.

Teach – Enhance AI learning with comprehensive courseware, hands-on exercises, and faculty support to develop your student curriculum.

For those who are just interested in exploring the AI Academy prior to becoming a member, there are plenty of basic benefits to be enjoyed with just a visit to the Intel® AI Academy.

Intel® AI Academy: Member Benefits

Whether you’re just starting out, or already an expert, the Intel® AI Academy provides the requisite benefits to help members understand, design, develop, optimize, deploy and teach the future of AI.  To access the full range of benefits, become a member of the Intel® AI Academy by joining now.

Intel® AI Academy for Students

Start seeking tomorrow’s breakthroughs today with the Intel® AI Academy for Students. Drive your own AI journey using curated student kits, learning materials, and university events to help you get started. Visit software.intel.com/ai/student-ambassador to learn more about the benefits of the program.

Intel® AI Academy for Professors

Inspire your students to seek tomorrow’s breakthroughs today with the Intel® AI Academy for Professors. Enjoy comprehensive courseware, hands-on exercises and answer keys for the classroom. Visit software.intel.com/ai-academy/professors to learn more about the benefits of the program.

 

Autodesk University Shows How Intel Technology Powers 3D Design & Engineering Software


Intel showcases virtual reality at Autodesk University 2017

At the Autodesk University event in Las Vegas, November 14-16, civil and commercial/industrial designers and manufacturers who use Autodesk software came together to see The Future of Making Things. These skilled professionals are described as the people who “design and make the world around us,” and at this event, they got an up-close look at how Intel® architecture (IA) is boosting performance, especially in two rapidly expanding fields: commercial virtual reality (VR) and generative design.

VR and Generative Design Offer Unlimited Design Possibilities

Commercial VR is on the rise, and it’s no wonder. According to Tech Pro Research, 47% of businesses are considering VR for the future, including everything from virtual product demonstrations to training and prototyping. Equally exciting is generative design, which uses artificial intelligence to explore all permutations for you. That means you can come up with optimized design possibilities in an automated way. It’s sort of like the way evolution works—over time, you get millions of different options for an eye, a wing, or a webbed foot. Only instead of taking thousands of years, it happens in an instant.

Using hands-on training sessions and exhibition demos, Intel and Autodesk showcased amazing design capabilities on everything from rich clients to cloud-based servers, end to end. Performance optimizations and new features were shown for VR, machine learning, graphics and simulation, demonstrating the powerful combination of technology and creativity for design and manufacturing.

At the Technology Trends executive panel, Kumar Chinnaswamy, Intel Client Computing Group Head of Commercial AR/VR Solutions, joined executives from Autodesk, Dell, Frame, HP, Lenovo and NVidia to share thoughts on the “Future of Making Things” with more than 1000 CAD/BIM/IT managers and Autodesk early-adopter power users from all industries.

Intel's Kumar Chinnaswamy speaking at Autodesk University 2017

Kumar’s talk focused on the Intel-powered AR/VR/MR and AI Revolution where immersive visualization, generative design and deep learning are creating a rare opportunity for creators, designers and innovators to disrupt both tech and non-tech industries.

Intel’s Debra Goss-Seeger hosted a panel on generative design; more than 125 people attended.

Generative Design Panel at Autodesk University 2017

The standing-room-only crowd heard industry experts discuss how cloud computing makes it possible for engineering teams to develop and explore the full design space for any problem they may want to tackle.

Intel Processing Power Enables Enhanced Capabilities for Autodesk Software

Autodesk University exhibits displayed inspiring VR capabilities on Intel-based systems running LIVE Design*, Arnold*, 3ds MAX*, Maya*, and ReCap* software. Attendees also witnessed impressive scalability of several Autodesk applications on new, many-core Intel® processors. One example is Arnold 5 rendering software, which runs:

  • 6.4X faster on a system with two Intel® Xeon® Platinum 8180 processors, for servers, with 56 cores compared to an Intel Xeon processor E5-1680 v4¹ with eight cores 
  • 2.3X faster on an Intel Xeon processor W-2195 with 18 cores, for workstations, compared to that same eight-core Intel Xeon processor E5-1680 v4¹

That kind of speed is critical for meeting the demands of modern animation and high-end visual effects (VFX) production. Attendees also had the chance to see Autodesk ReCap Photo creating 3D models faster on higher frequency Intel Xeon processors: the software runs 1.83X faster on an Intel Xeon processor W-2145 compared to an Intel Xeon processor E5-2697 v3².

ReCap Photo is a new cloud-connected solution tailored for drone-based image capture. Available with a ReCap Pro subscription, ReCap Photo cloud processing uses multicore Intel Xeon processors. The photogrammetry process of the ReCap Photo software happens in the cloud, but other features—such as 3D model visualization, editing, and model exporting to other Autodesk solutions like Revit, InfraWorks, or Civil 3D—are performed locally on the user’s workstation, where Intel Xeon processors enable project teams to better explore and communicate their design visions.

Tech Talks in the Intel Booth at Autodesk University 2017

At the Autodesk University event, attendees also heard a tech talk about the advantages of the Intel® Xeon Phi™ processor bootable host. It delivers massive parallelism and vectorization to support the most demanding machine learning design paradigms for Autodesk Netfabb* additive manufacturing and design software and the Dreamcatcher generative design system.

For mainstream computer-aided design (CAD), Intel® HD Graphics and Intel® Iris™ Pro Graphics technologies were demonstrated with Intel® Xeon® processor E3-based systems. These Intel® graphics technologies have been supported by AutoCAD software since 2014, and the latest versions of the Intel graphics certified software and validated workstations delivered better than ever performance and resolution for remote users of cloud-based software. 

In addition, performance optimization for Intel® Xeon® Scalable processors was demonstrated for mainstream design tools including AutoCAD, Revit, and VRED as well as for simulation tools like Autodesk Fusion*, Nastran* and Autodesk explicit simulators. That means more designers and engineers using these tools across multiple industries will recognize the benefits of having Intel Inside®.

The possibilities are endless for Autodesk-based commercial VR and generative design on IA workstations and cloud environments. Read the Intel solution brief for more information on the ultimate visualization performance powered by Intel Xeon Scalable processors and Intel Xeon W processors.

About Tim Allen

Tim Allen, Intel Global ISV Alliance Manager
Tim is a strategic marketing manager for Intel with responsibilities for cloud, big data, analytics, datacenter appliances and RISC migration. Tim has 20+ years of industry experience including work as a systems analyst, developer, system administrator, enterprise systems trainer, and marketing program manager. Prior to Intel, Tim worked at Tektronix, IBM, Intersolv, Sequent, and Con-Way Logistics. Tim holds a BSEE in computer engineering from BYU, PMP certification, and an MBA in finance from the University of Portland. Follow Tim and the growing #TechTim community on Twitter: @TimIntel.

View all posts by Tim Allen.

 

 


Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark* and MobileMark*, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

1 Configurations: Based on internal testing at Intel that rendered the same workload in Arnold 5 on a system running two Intel Xeon Platinum 8180 processors (2.5/3.8 GHz, 56 cores, 112 threads) with 64 GB DDR4 memory and a PCIe* solid-state drive (SSD), a system running an Intel Xeon processor W-2195 (2.3/4.3 GHz, 18 cores, 36 threads) with 64 GB DDR4 memory and an Intel® Optane™ SSD 900P Series drive, and a system running an Intel Xeon processor E5-1680 v4 (3.4/4.0 GHz, 8 cores, 16 threads) with 64 GB DDR4 memory and a SATA SSD.


2 Configurations: Based on internal testing at Intel that rendered the same workload in ReCap running CPU I/O profile (open, decimate, save) with a large dataset (Vertex: 47 million, Faces: 93 million, 2.16GB) on a system running an Intel Xeon processor W-2145 (3.7/4.5 GHz, 8 cores, 16 threads) with a PCIe* solid-state drive (SSD), and a system running two Intel Xeon processors E5-2697 v3 (2.6/3.6 GHz, 28 cores, 56 threads) with a 1.2TB Intel® SSD 750 Series drive with NVMe*.

 

 

Using Whitelists to Improve Firmware Security


Firmware has become an increasingly popular target in computer security research. Attacks operating at the firmware level can be difficult to discover, and have the potential to persist even in bare-metal recovery scenarios. This type of attack has been well documented by investigations of the HackingTeam and Vault7 exploits.

Fortunately, there are methods for detecting and defending against such attacks. Firmware-based attacks typically attempt to add or modify system firmware modules stored in NVRAM. Tools provided by the open source CHIPSEC project can be used to generate and verify hashes of these modules, so users can detect unauthorized changes.

CHIPSEC, introduced in March 2014, is a framework for analyzing platform-level security of hardware, devices, system firmware, low-level protection mechanisms, and the configuration of various platform components. It contains a set of modules, including simple tests for hardware protections and correct configuration, tests for vulnerabilities in firmware and platform components, security assessment and fuzzing tools for various platform devices and interfaces, and tools for acquiring critical firmware and device artifacts.

The whitelist module (tools.uefi.whitelist) uses CHIPSEC to extract a list of EFI executables from a binary firmware image, and builds a list of “expected” executables and corresponding hashes (a .JSON file) to be used later for comparison. Documentation is available as part of the module source code on GitHub, and in the CHIPSEC manual.

The process assumes you’re starting from a “known good firmware image”, preferably a production version of the firmware provided by the manufacturer, and that you’ve used CHIPSEC to scan for known issues before accidentally whitelisting something nasty. This example uses the open source UEFI firmware for the MinnowBoard Turbot (release 0.97), for better visibility into the process.

Generating a Firmware Whitelist

The following CHIPSEC command line creates a whitelist named efilist.json based on a “known good image” (platform_fw.bin):

python chipsec_main.py -i -n -m tools.uefi.whitelist -a generate,efilist.json,platform_fw.bin

CHIPSEC in Ubuntu Linux

Applying this process to the x64 “release” and “debug” images of the MinnowBoard Turbot 0.97 firmware produces the following whitelists:

  • MNW2MAX1.X64.0097.D01.1709211100.bin (UEFI x64 firmware, debug mode) - X64D97_whitelist.json
  • MNW2MAX1.X64.0097.R01.1709211052.bin (UEFI x64 firmware, release mode) - X64R97_whitelist.json

(exercise for the reader: download the firmware images, run CHIPSEC, see if your whitelist matches)

It’s important to note that the resulting JSON whitelist file isn’t just a hash of the binary file; it contains hashes of the firmware image’s executable modules. CHIPSEC has the ability to scan and catalog individual firmware components, which can be used to whitelist or blacklist specific executables.
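To illustrate the idea of a per-module whitelist (this is a conceptual sketch, not CHIPSEC’s actual JSON schema, and the module names and bytes here are hypothetical), each executable extracted from the image is paired with a hash of its contents:

```python
import hashlib
import json

def build_whitelist(modules):
    """Map each module name to the SHA-256 hex digest of its contents."""
    return {name: hashlib.sha256(data).hexdigest()
            for name, data in modules.items()}

# Hypothetical EFI modules extracted from a firmware image
modules = {
    "DxeCore.efi": b"\x4d\x5aexample-dxe-core-bytes",
    "Shell.efi":   b"\x4d\x5aexample-shell-bytes",
}
print(json.dumps(build_whitelist(modules), indent=2))
```

Because each module is hashed individually, a later comparison can pinpoint which executable changed, rather than only reporting that the image as a whole differs.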

Posting Whitelists

If you were a system manufacturer, you could publish these whitelists for customer verification. However, it’s best to sign files for verification purposes. To assure that the whitelist obtained by an end-user is the official whitelist, it can be signed with GPG using a detached signature:

gpg --detach-sign <file>

Additionally, when whitelists are delivered over HTTPS, users should verify that the certificate is for a well-known domain belonging to the system manufacturer. It’s also advisable for IT organizations to sign internally generated whitelists to prevent unauthorized lists from being used in audits.

Checking Whitelists

If a detached signature is available, GPG can be used to verify the whitelist file:

gpg --verify <file>

Once the authenticity of the whitelist has been verified, the user needs to take an image of the platform firmware using CHIPSEC. This command line uses the ‘dump’ command to generate user_fw.bin:

python chipsec_util.py spi dump user_fw.bin

Now CHIPSEC can be used to verify the platform firmware image against the associated whitelist. This example would verify against the x64 “release” image for Minnowboard Turbot 0.97:

python chipsec_main.py -i -n -m tools.uefi.whitelist -a check,X64R97_whitelist.json,user_fw.bin

At this point, CHIPSEC will return a “PASSED” or “WARNING” status. If the test returns “PASSED”, all hashes were found in the whitelist and they match expected values. If the test returns “WARNING”, then something is different. Note that a “WARNING” doesn’t immediately indicate a security threat (for example, the user may have updated to the manufacturer’s latest firmware), but it does require investigation to make sure the altered firmware image isn’t due to an attempted attack.
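The check logic can be sketched in a few lines (again a conceptual illustration, not CHIPSEC’s implementation; module names are hypothetical): hash every module extracted from the user’s firmware dump and flag anything missing from, or different than, the whitelist.

```python
import hashlib

def check_firmware(modules, whitelist):
    """Compare the hash of each extracted module against the whitelist.
    Returns ("PASSED", []) when everything matches, otherwise
    ("WARNING", [names of modules that are new or changed])."""
    suspects = [name for name, data in modules.items()
                if whitelist.get(name) != hashlib.sha256(data).hexdigest()]
    return ("PASSED" if not suspects else "WARNING", suspects)

good = {"Shell.efi": hashlib.sha256(b"shell-v1").hexdigest()}
status, diff = check_firmware({"Shell.efi": b"shell-v1"}, good)
print(status)           # -> PASSED
status, diff = check_firmware({"Shell.efi": b"shell-v2"}, good)
print(status, diff)     # -> WARNING ['Shell.efi']
```

Reporting the suspect module names, rather than a bare pass/fail, is what makes the follow-up investigation tractable.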

Limitations

Using tools like CHIPSEC can improve security, but no system can be absolutely secure, and these procedures cannot defend against all possible attacks. There are a number of limitations to consider when interpreting whitelist results:

  • There may be customized or proprietary image packaging methods that CHIPSEC does not understand. This could result in some modules being excluded, so corresponding changes may be missed during a whitelist comparison.
  • Software running on an already-compromised system can be fooled. If a system is already compromised, CHIPSEC and similar software may not produce reliable indicators.

Summary

The ability for CHIPSEC to generate whitelists provides a method for validating the firmware supply chain. Manufacturers can leverage these tools to help users verify official firmware releases, and IT administrators can generate whitelists for internal audits based on trusted images.

References

CHIPSEC: https://github.com/chipsec/chipsec and https://twitter.com/CHIPSEC

MinnowBoard: https://firmware.intel.com/projects/minnowboard-max and https://twitter.com/minnowboard

Thanks to John Loucaides for providing CHIPSEC whitelist examples and JSON sample files.

Using Wind River® Simics® to Inspire Teachers in Costa Rica


Teaching the Teachers Fernando In Front of Blackboard
Fernando Molina, Intel Costa Rica intern

It is often said that you do not know something for real until you have taught it to someone else. Recently, I had the delight of working through just such a process with Fernando Molina, an intern at Intel in Costa Rica. We worked together to create a workshop for university teachers and researchers to show them how they can use Wind River® Simics® tools and virtual platforms at universities and other institutions of higher education. Fernando ended up with one of his university teachers in the classroom, teaching his own teacher! During the process of developing the workshop, Fernando also had to learn Simics and apply it to the task of developing a Linux* device driver and associated test programs. Lots of learning on all sides.

The workshop was based on a complete system stack simulated in Simics: Generic Intel® PC-style system model, PCI-express* (PCIe) device, device driver, Linux kernel PCIe system, and application-level software using the device through several Linux operating system (OS) application programming interfaces (APIs). We created a simulated PCIe device that featured buttons for input and a few light-emitting diodes (LEDs) for output. In the real world, this would have been something like a panel attached to a PCIe slot in a PC - not a very realistic device in terms of functionality, but a great basis for explaining how devices and device drivers work. 

The setup is shown in the figure below:
Teaching the Teacher Simics Setup

JE: Fernando, thank you for joining me for this interview. Could you please start by introducing yourself to our readers?

FM: I am Fernando Molina, from Costa Rica. I’m a Computer Science student at the Costa Rica Institute of Technology (also known as Tecnológico de Costa Rica, TEC), in my last semester. I started working at Intel in August 2016 as an intern. This is my first job, and I’m very happy to start at a place like Intel.

JE: What did you do in this project?

FM: I supported the idea of holding a workshop to show Simics capabilities to academia in my country. Simics is a powerful tool that can be used as a learning aid in classrooms, and the idea was to create a setup that could show teachers, through example, what Simics is and what it is capable of doing. During this project, I had to work with the Linux kernel to create some procedures in a device driver for a small LED board that was connected to the simulation through PCIe. This was a challenge for me; given my background, I had never worked with Linux at such a low level, but the Simics tools helped me get onto the right track fast. I mainly worked using C to write the driver and then Simics to test and debug it.

JE: What did you learn about Simics?

FM: I learned it is an incredible tool for software development, especially for low level software. Simics provides a lot of information that can be hard to get using normal software development tools or normal hardware debugging approaches. Changing the code, building and testing a new version didn’t take more than a minute, which made the development really fast. 

JE: How did that work?

FM: The magic of Simics automation. Using Simics, it’s a matter of seconds to rebuild the driver code, load a previous checkpoint with the machine already booted, and install the new kernel module from there, without having to wait for the platform to boot from scratch! This made development very fast: just recompile and launch the new Simics session from the checkpoint.

Here is the part of the script that loads and installs the kernel module (insmod) of the kernel driver (after we started from the checkpoint):

Teaching the Teacher Auto Loading coding

JE: In the driver updates, how did you deal with problems that arose?

FM: The Simics Eclipse tools help with debugging and pinpointing errors more easily. You can even debug kernel code if you like! I found Simics to be a perfect tool for developing software for devices or platforms that are not physically available, and even when they are, development (and especially debugging) using Simics is much more convenient. Reproducing problems was very easy.

JE: What did you learn about PCIe and Linux?

FM: I learned a lot about PCIe, given that I had no prior experience with it. PCIe and Linux work nicely together: the Linux kernel maps the device automatically to an available address, and it can then be accessed through file handles or direct memory accesses. Your help was appreciated on this topic.

Here is an example of the code of the device, showing the start of the PCIe configuration space of the model. This code is written using Simics DML, a domain specific-language for device modeling that is used to quickly build device models for Simics:

Teaching the Teacher PCIe Config Space

The template for the register bank provides reasonable defaults for all the mandatory registers, so the declaration here just provides the specifics that makes this device different.  In particular, the various IDs required by the PCI standard (and its PCIe successor) to identify the device so that the correct driver can be loaded. 

Following that, there is the setting of the status.c bit. This indicates that the device has PCI capabilities defined. The capabilities are used to declare that the device uses extended message-signaled interrupts (MSI-X interrupts), among other things. The bit was missing in the first drafts of the device model, and as a result the Linux kernel refused to set up IRQs and send interrupts. After tracing the calls in the kernel and looking at the replies from PCIe APIs, as well as a second reading of the PCIe standard, it was clear the bit was needed. With a virtual platform, that kind of exploration from scratch is possible in a way that is really difficult to do in hardware.

After that, the code visible here sets up the base address registers (BARs) that make the programming registers and MSI-X interrupts banks addressable on the processor’s memory bus.  

JE: I remember my own experience building my first device driver a decade ago for a previous project in the same vein. But back then, it was just a plain memory-mapped device and a 2.6-series kernel.  PCIe is rather different... you get a lot more help from Linux, but there are also more things to do right.

FM: Yes, of course. I found PCIe to be a very complete (if not complicated) standard. In the model, I could see a lot of registers that the standard defines, and these registers work nicely with the kernel to help map the device into the PCIe hierarchy. Luckily I didn’t have to do anything with those registers; the base driver you gave me at the start already had the kernel connection worked out, but I had to program the memory-mapping mmap() device driver API calls and the device’s file descriptor read/write functions.
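JE: For readers unfamiliar with what mmap() gives user space, here is a small illustration (not the workshop code; the file path and register layout are hypothetical stand-ins, and an ordinary file plays the role of the device node, since the real mapping is backed by the driver’s mmap() handler and the PCIe BAR):

```python
import mmap
import os
import tempfile

# Stand-in for the driver's device node (path and layout are hypothetical);
# in the real setup the driver's mmap() handler would back this mapping
# with the device's PCIe BAR instead of a plain file.
path = os.path.join(tempfile.mkdtemp(), "led_panel")
with open(path, "wb") as f:
    f.write(b"\x00" * mmap.PAGESIZE)     # one page of fake registers

with open(path, "r+b") as f:
    regs = mmap.mmap(f.fileno(), mmap.PAGESIZE)
    regs[0] = 0b00000101                 # "light LEDs 0 and 2"
    regs.flush()
    print(hex(regs[0]))                  # -> 0x5
    regs.close()
```

Once the mapping is in place, a plain byte store toggles the device register; no read()/write() system call is needed on the hot path, which is exactly why drivers expose register banks this way.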

JE: How was it to work with device drivers on Simics?

FM: Once I understood the driver basics (not taking into account the PCIe connection part), it was pretty straight-forward. Since the device driver execution is made in a Kernel execution context, one needs to be careful with the way one codes because it is really easy to make a mistake, but since the device driver we were programming would basically just blink some LEDs in a panel there wasn’t really much room for errors.

JE: Let’s take a slightly deeper look at what we used in the workshop. Here is a screenshot of the setup running (on top of a Fedora* Linux):

Teaching the Teacher Simics Running

The user enters “echo” commands on the target system serial console. These echo commands send characters to the device node for our device driver, which parses the provided strings and then lights up (or turns off) the LEDs in the System Panel as appropriate. In the black “Textual Graphics Console”, we see the diagnostic output from the admittedly rather chatty device driver (it is intended as a teaching tool, after all). 

This shows the path from user applications through the filesystem to the kernel to the device driver to the PCIe memory mapping to the device and finally to the output in the system panel.  A complete pass through the stack – all driven by a plain “echo” command.

JE: In your opinion, what did the participants think of the workshop? 

FM: I believe the participants discovered a tool I’m quite sure they had never seen before, at least here in my country. Simics is an impressive full system simulator, and I’m certain the participants now know this as well. In an academic environment, Simics can be used for subjects like Embedded Systems, Operating Systems or Computer Architecture. The simulator can help students understand the concepts taught in the classroom more easily with hands-on experience.

JE: Which parts were the most interesting to the participants?

FM: I think the most interesting part of the workshop, at least for me, was watching Simics’ debugging capabilities. It is impressive how easy it is to dive into the device driver’s source code for inspection and, from there, even step through operating system routines; it was quite amazing the first time I tried it, and I’m sure the participants felt the same way. Also, being able to connect two simulations through a network is interesting as well, and demonstrates just how powerful Simics can be for hardware and software development.

JE: We also had an interesting time getting the materials packaged in an easy-to-use way.  USB keys sounded like the best idea… but then we got into some trouble.

FM: Ha ha, yeah.

We found out the hard way that not all USB keys (even if they are USB 3.0) have the same read/write speeds. We first tested making a bootable USB key (this means, the USB key behaves like an external hard drive and the user boots the entire OS from the USB key as the main storage drive). This worked like a charm for the USB key I used in development. 

We then proceeded to copy this image onto newly bought USB keys from a different manufacturer. When we tested these new keys, the performance was very poor. Using the initial USB key, the OS behaved like a normal PC, but with the second type of USB keys, booting took three or four times as long and the general performance was unusable.

When I benchmarked the USB keys, I found out that while both had read speeds close to 100 MB/s, the initial USB key had a write speed of 50 MB/s whereas the second set only just reached 15 MB/s! This made them unsuitable for use as bootable USB keys, since the boot writes quite a bit of data to the disk. It also affected the performance of Simics Eclipse, where some operations would bog down completely due to being write-intensive.

The moral of the story is, if you test something in one environment, always make sure the environment is the same if you want to get the same results.

JE: Thank you very much! 

Closing Remarks

This work shows how Simics can be used as a teaching tool, at all levels of the software and hardware stack down to the internal logic of a hardware design. It does not cover RTL (Register Transfer Level) and implementing the design in actual hardware, but it provides a good way to show how things fit together. 

In my opinion, all students that will work with programming, or computers, or system design in some way should have a basic understanding of the whole software stack. When balancing on top of the stack, it is good to know what is in it. When writing code down at the bottom, understanding what goes on at higher levels is definitely important. When designing hardware, understanding what drivers find useful is very important. Understanding the whole software stack from the hardware interface to the application software is important. And Simics is a great way to get that understanding, with the ability to work at any level.

 


Missed the Intel® AI Academic DevJam? Here are the Top 4 Highlights


Intel AI Academy DevJam Crowd Intro Image

The evening before the Dec. 4-9, 2017 Neural Information Processing Systems (NIPS) Conference, hundreds of students, professors, data scientists and developers packed into The Westin Long Beach for the Intel® AI Academic DevJam.  

Attendees learned how to apply AI/machine learning (ML) and deep learning (DL) to their projects, heard from Intel's lead AI engineers about what's next, and built ML and DL skills with Intel® tools, frameworks and resources, all while having fun sharing ideas with other students, professors, data scientists and developers.

One student attendee told me, “I didn’t expect to see this type of cool event from Intel.” Another simply walked in and said, “Wow!” In case you missed the DevJam, here are the four highlights from the event:

1. Getting Hands-On with AI Technical Demos

There were more than 10 technical demos showing off the latest in ML and DL.  Intel® Innovators shared their latest projects, and Student Ambassadors showed-off their research.  Here are just a couple of examples:

AI DevJam Demo Team
Free skin cancer check-ups with Doctor Hazel—Intel® Innovator Peter Ma showcased Doctor Hazel, which uses AI to determine if you have skin cancer, as featured in a recent TechCrunch article. A student attendee told me, “This is inspiring to see ideas like this become reality, and using technology to better the world.”

NASA Frontier Development Lab Lunar Crater Identification—the NASA team showcased AI in space resource exploration, highlighting our collaboration. We partnered on the AI space resource exploration mission challenge: Lunar Water and Volatiles. The purpose of the challenge was to use AI to determine the location of, and most promising access to, vital lunar H2O, in terms of cost effectiveness and engineering constraints. Do-it-yourself instructions and guides were provided to help DevJam attendees join in on the excitement and recreate their very own lunar crater detector at home.

2. The Intel® Movidius™ Embedded Image Classification Challenge (EICC)

Attendees were invited to sign up for the EICC. The Challenge enables developers to prototype networks and create AI applications at the edge, testing their network training skills by fine-tuning convolutional neural networks (CNNs) targeted for embedded applications. 

Movidius Neural Compute Stick

The total prize is $20,000.  Registration opened Dec 3, and winners will be announced March 15, 2018. Contestants use the mvNCProfile tool with the Intel Movidius Neural Compute Stick to analyze the bandwidth, execution time and complexity of their network at each layer, and tune it to get the best accuracy, execution time and power-efficiency. It was great to see so many people signing up in the booth.

3. Discovering what’s to come in AI

DevJam Amir
Amir Khosrowshahi, Vice President of the Intel® AI Products Group, gave a deep-dive keynote presentation on the history and future of AI. He highlighted the impact of AI on application evolution, through DL and beyond, across the edge, gateway and cloud data center. He shared insights about new Intel research in DL and AI, and he spoke about Intel’s leadership with government, business, and academic thought leaders to drive the positive impact of AI for the betterment of society.

DevJam Panel

A “Future of Data Science, Algorithms, and Hardware in the Age of AI” panel of Intel AI Products Group experts followed, moderated by Julie Choi, Head of Marketing, with Jason Knight, Platform Architect—CTO Office, Yinyin Liu, Head of Data Science, and Data Scientists Xin Wang, Tristan Webb, and Marcel Nassar. They discussed what Intel is doing to accelerate AI research and data science. You can hear more from Yinyin Liu in her Facebook Live interview at the DevJam.

 

4. Student Ambassador Poster Chats: Three Amazing Talks

The first Student Ambassador talk was by Devinder Kumar from the University of Waterloo. In “Explaining the Unexplained: A Class-Enhanced Attentive Response (CLEAR) Approach to Understanding Deep Neural Networks” Devinder proposed an approach to visualize and understand the decisions made by deep neural networks (DNNs) given a specific input. CLEAR can mitigate some of the shortcomings of heatmap-based methods, and it allows for better insights into the decision-making process of DNNs. You can learn more in his Facebook Live Chat from the event.

DevJam Poster

David Ojika from the University of Florida gave the next talk, “EdgeNN: Deep Learning Inference for High-Speed, Massive Event Data.”  David described an innovative approach to massive-scale inference such as that used by voice-activated assistants, speech recognition, image recognition, spam filtering applications, and online recommendation engines.

The final Student Ambassador talk, by Carlos Paradis from the University of Hawaii at Manoa, was called “PERCEIVE: Identifying Past Concepts to Enable Proactive Cybersecurity.” Carlos demonstrated LDA topic modelling to identify emerging cybersecurity threats. His Topic Flow approach ties topics over time to understand emerging and evolving cybersecurity discussion threads.

The Intel® AI Academic DevJam closed with attendees redeeming tokens for Swag Bags, and the event’s social media Twitter Challenge winner was announced—the prize was a ticket to the exclusive AI After-party at the Loft, an invitation-only gathering featuring a performance by rapper, singer and songwriter Flo Rida.

More Intel AI Academic DevJams are planned for 2018. For more information about them and about the Intel AI Academy student program, technical training, and community resources, visit the Intel® AI Academy.

© 2017, Intel Corporation. All rights reserved.

* Intel, the Intel logo, Xeon and Core are trademarks or registered trademarks of Intel Corporation.  Other names may be claimed as the property of others.

Top Ten Intel Software Developer Stories December


Introducing deVR Beat Bulletin

Introducing the deVR Beat Bulletin

Elevate your VR knowledge and stay connected. Get the latest tutorials, case studies, and training for virtual reality, mixed reality, and augmented reality delivered monthly to your inbox. Sign up today.


Intel® HPC Developer Conference.

Intel® HPC Developer Conference 2017 Keynotes and Sessions

Revisit the best from the Intel® HPC Developer Conference. Keynotes and plenary sessions delivered at the conference are now available online.


2017 Modern Code Challenge Winners

Winners Announced for the Modern Code Developer Challenge

We recently announced the winners of our Modern Code Challenge at SC17. Elena Orlova was a winner for her work on improving particle collision simulation algorithms. Konstantinos Kanellis was awarded for his work on cloud-based biological simulations. Read more about the students and the competition.


Create a Remote IoT Service

Code a Service to Remotely Control an IoT Device

Daniel Holmland gives us step-by-step instructions to create a service that will remotely control your IoT device.


Big Digital Sign

What It Takes to Build a Decision Signage Platform

Daron Yondem from XOGO Decision Signage shares how his company built their product using Intel and Microsoft* technology.


Using BigDL to Build Image Similarity-Based House Recommendations

Read how MLSListings Inc. in Northern California is collaborating with Intel and Microsoft* to create an application that will enhance the home buying experience by sorting listings by users' house style preferences.


Telltale Games: Storytelling Superstars

We go inside Telltale Games* studio to learn how they revolutionized storytelling gameplay beginning with their game, The Walking Dead.


Intel® Processors for Deep Learning Training

Andres Rodriguez explains how the latest Intel® Xeon® Scalable processors can provide compute power for deep learning training workloads along with optimized functions included in the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) library.


VR Developer Tutorial: Testing and Profiling the Premium VR Game

Learn to locate CPU or GPU performance bottlenecks in your game with these testing and profiling tools.


tinyBuild’s Odyssey from Indie Developer to Compassionate Publisher

The history that tinyBuild* has of building games gives this publisher a unique and compassionate approach to partnering with indie game developers.


Intel® Developer Zone experts, Intel® Software Innovators, and Intel® Black Belt Software Developers contribute hundreds of helpful articles and blog posts every month. From code samples to how-to guides, we gather the most popular software developer stories in one place each month so you don’t miss a thing. Miss last month? Read it here.

Intel® Software Innovator Johnny Chan: Programmer, Educator, and Open-Source Enthusiast

As someone who is constantly in search of more knowledge, Intel® Software Innovator Johnny Chan is always eager to learn and try new things. He then takes things a step further and shares what he learns with others through his blog and forum contributions in order to help ease the way for others trying to learn new technologies too. Johnny tells us a bit more about himself and the projects he is working on.

TELL US ABOUT YOUR BACKGROUND.

Professionally I’ve been a technologist for a large global investment bank, an analytics consultant for a major UK commercial and private bank, a full-stack developer for a wellness startup, and briefly an engineering intern for an airline. In my spare time I am the creator and author of Mathalope.co.uk (a tech blog visited by 120,000+ students and professionals from 180+ countries so far), volunteer developer of the Friends of Russia Dock Woodland Website, open source software contributor, hackathon competitor, and part of the Intel® Software Innovator Program. I am currently working on becoming a better machine learning engineer and educator.

WHAT GOT YOU STARTED IN TECHNOLOGY?

When I was studying for my master’s in aeronautical engineering at Imperial College London, I learnt to write small Fortran/MATLAB programs that could take, say, satellite data and spit out a geographical location on Earth. I then started my professional technology career in 2008 at a global investment bank, where I collaborated with colleagues from all four global regions (EMEA, ASPAC, NAM, and LATAM) and developed and rolled out a fully automated capacity planning and analytics tool, along with governance and processes - protecting the 100,000+ production systems (including Windows, UNIX, AIX, VM, and Mainframe platforms) from the risk of overloading. I built the system with proprietary technologies such as SAS, Oracle, Autosys batch scheduling, Windows/UNIX scripts, and internal configuration databases. In 2014 I decided to learn about open source technologies in my spare time, and as a result created Mathalope.co.uk. Since then I’ve learnt to program in more than 10 languages; my current favorites are Python and JavaScript due to their expressiveness, syntax, and relevance to building modern applications. You can check out my GitHub contributions here.

Over the past year I have also taught myself machine learning and parallel distributed computing with the help of deeplearning.ai, the Stanford Machine Learning course, Karpathy’s Stanford cs231n Convolutional Neural Networks for Visual Recognition, the Colfax High Performance Computing deep-dive series, and many other deep learning books and courses online.

WHAT PROJECTS ARE YOU WORKING ON NOW?

I am currently building fungAI.org - a machine learning application with the aim of identifying wild mushroom species from images using deep learning techniques. The project was primarily inspired and motivated by a casual friend’s Facebook post from a walking trip:

“Hey do you know what mushroom this is?”

Coincidentally my partner, who is a conservationist, happens also to be a mushroom enthusiast and so naturally we’ve formed a couple’s team. We think the project will be fun and educational.

You can read more about the project concept, try out an initial ReactJS frontend toy demo, and check out this Intel Developer Mesh Fungi Barbarian Project page. All project source code is open sourced on GitHub - you may find more demos and GitHub repository links here.

TELL US ABOUT A TECHNOLOGY CHALLENGE YOU’VE HAD TO OVERCOME IN A PROJECT.

During the summer of 2015 I spent an entire weekend just trying to get OpenCV-Python, Windows, and the Anaconda package manager to work together for a personal computer vision project. I remember searching hard on the internet for solutions, trying out many of them, and failing countless times. After many rounds of trial and error and investigation, I eventually solved the problem by combining multiple “partially working” solutions. In the end I decided to write an article summarizing my solution via a blog post, which has since been viewed more than 120,000 times. To increase the range of impact I also posted it as a solution on a Stack Overflow thread - the thread has so far been viewed over 200,000 times and my solution has received 50+ “good citizen brownie points” upvotes. It turns out many developers around the world had bumped into similar issues at the time and got the problem solved with the help of these articles.

This experience has taught me an important lesson on making an impact: it doesn’t have to be building the next Google or Facebook - all it requires could be as simple as writing up a summary of how you’ve solved a problem and sharing it online. We only get to live once.

WHAT TRENDS DO YOU SEE HAPPENING IN TECHNOLOGY IN THE NEAR FUTURE?

A recent talk presented by O’Reilly and Intel® Nervana™ in September 2017, AI is the New Electricity by Andrew Ng, discussed the trends and value creation of machine learning. This is my one-liner summary, taken from Andrew:

Today, the vast majority of value across industry is created by supervised learning, closely followed by transfer learning

Personally, I am super excited about transfer learning and believe this technique will be used a lot to solve many specialized problems. Say we wish to train a model to recognize different types of flowers. Instead of spending months training a model from scratch with millions of flower images, we can take a massive shortcut: take a pre-trained model like Inception v3 that is already very good at recognizing objects from ImageNet data, use it as a starting point, and train the more specialized flower-recognition model from there. The end result? You only require about 200 images per flower category, and training the new model takes only about 30 minutes on a modern laptop’s CPU.

This suddenly makes deep learning very inclusive to everybody

An ultra-powerful and expensive graphics processing unit (GPU) is no longer a “must have” requirement for solving deep learning problems. Transfer learning and open source software together have made deep learning more inclusive and accessible to all. The power of inclusiveness will enable stronger communities, knowledge sharing, and further technological advancement of deep learning in the near future.
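The transfer-learning recipe described above can be sketched in a few lines. The snippet below is a minimal, self-contained illustration only: a random projection stands in for the frozen pre-trained backbone (in practice this would be a network like Inception v3 with its classification head removed), and synthetic arrays stand in for the small labeled flower dataset. The key idea it demonstrates is that only a small classifier head is trained, while the backbone's weights stay fixed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pretrained" backbone: a stand-in for a real feature extractor
# such as Inception v3 with its classification head removed. These
# weights are never updated while training the new head.
W_pretrained = rng.normal(size=(64, 32))

def extract_features(images):
    """Run inputs through the frozen backbone (ReLU features)."""
    return np.maximum(images @ W_pretrained, 0.0)

# Synthetic stand-in for the small specialized dataset
# (e.g. ~200 labeled flower photos).
X = rng.normal(size=(200, 64))
feats = extract_features(X)

# Standardize the frozen features before fitting the head.
feats = (feats - feats.mean(axis=0)) / feats.std(axis=0)

# Synthetic binary labels that are learnable from the frozen features.
scores = feats @ rng.normal(size=feats.shape[1])
y = (scores > np.median(scores)).astype(float)

# Train ONLY a small logistic-regression "head"; the backbone stays fixed.
w, b, lr = np.zeros(feats.shape[1]), 0.0, 0.1
for _ in range(1000):
    probs = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
    grad = probs - y
    w -= lr * feats.T @ grad / len(y)
    b -= lr * grad.mean()

preds = ((feats @ w + b) > 0.0).astype(float)
accuracy = (preds == y).mean()
print(f"head-only training accuracy: {accuracy:.2f}")
```

In a real project the frozen extractor would come from a deep learning framework, and only the head (or the last few layers) would be fine-tuned on the new, small dataset - which is why training finishes in minutes rather than months.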

HOW DOES INTEL HELP YOU SUCCEED?

Intel supports innovative projects, such as fungAI.org that I’m currently working on, by providing access to state-of-the-art deep learning technologies: Intel® Xeon Phi™ enabled cluster nodes for model training, the Intel® Movidius™ Neural Compute Stick for embedded machine learning applications, and more. At a personal level, Intel has provided me access to a community of technology experts and innovators working in artificial intelligence (AI), the Internet of Things (IoT), virtual reality (VR), and game development - where I get to learn and be inspired. Recently I was sponsored by Intel to take part in events including the Seattle Intel® Software Innovator Summit 2016 and the Nuremberg Embedded World Expo 2017, where I had the opportunity to travel, learn, and contribute to the tech community. I really appreciate the amount of effort the Intel® Software Innovator Program team has put into enabling the long-term success of the innovation community. It has been a privilege and I thank you all for the opportunities.

OUTSIDE OF TECHNOLOGY, WHAT TYPE OF HOBBIES DO YOU ENJOY?

Since 2009 I’ve been playing social mixed-gender non-contact touch rugby and tag rugby leagues here in London. It’s a fun way to socialize and meet new friends in an active way. I would highly recommend this social sport to anybody.

Want to learn more about the Intel® Software Innovator Program?

You can read about our innovator updates, get the full program overview, meet the innovators and learn more about innovator benefits. We also encourage you to check out Developer Mesh to learn more about the various projects that our community of innovators are working on.

Interested in more information? Contact  Wendy Boswell

DeveloperWeek Austin 2017

DeveloperWeek Austin held its first event in 2015, and stays true to its goal of exposing developers, engineers, architects, managers, executives, and entrepreneurs to a wide variety of tools and techniques across many development fields, including apps, FinTech, machine learning, and virtual reality (VR), among others. This year’s Austin event was held on November 8th and 9th at the Palmer Events Center.

On the expo floor, Underminer Studios demoed our Virtual Engagements on Vive, “Confronting Fear of Heights,” which is now available for free on Viveport. Envisioned as a treatment option for those looking for an alternative to traditional therapy or medication, this self-led process was built on a foundation of cognitive therapies and boosted with tech, gamification, and accessibility as VR hardware becomes faster and cheaper. Our experience has four levels of challenge, where the scenarios become increasingly realistic and demanding. There is a safeguard known as “calm worlds” that allows for engagement at the user's discretion.

Our demo was the only VR content being shown in the expo so it garnered some extra attention and we met developers with a wide range of experience from those that had not used VR at all to a few that had developed for non-web VR. We offered a challenge for the Hackathon for the best use of the Intel® Graphics Performance Analyzer (Intel® GPA) tool to optimize their projects. We were featured in this Intel article, VR Optimization Tips from Underminer Studios, and are often regarded as experts in optimization techniques.

The conference portion of the event included many presentations on workflow and tools. There was a VR-focused three-lecture series in DevWeek’s Pro track, which introduced people to open source and open standards, prototyping with ARKit, and WebAR - good ways to get people interested in playing with the foundations of VR/AR. Underminer Studios set up a VR challenge in the DevWeek Hackathon, though unfortunately none of the contestants were familiar enough with the software to get anything running.

Learn more about DeveloperWeek Austin:

Visit their website here.

Learn More About Underminer Studios:

Based in Austin, Texas, Underminer Studios has been in the emerging technology space for two years, utilizing more than a decade of experience, industry connections, and out-of-the-box thinking to create unique products. Our team is driven by a passion for impactful uses of technology: a solution-focused company serving many markets, changing perspectives on how technology can solve real problems, and shaping the future with leading-edge solutions.

The Fab Five: Game Developer Content December

Dungeons 3 Takes Warcraft and Dungeon Keeper and Makes Something New

User expectations that the successor to Dungeon Keeper would have similar aspects made game developer Realmforge* Studios dig deeper to make a more compelling game in Dungeons 3.


More than a Price Tag: How to Price Your App

Get tips to help you find the “sweet spot” for pricing your application.


Introducing the deVR Beat Bulletin

Elevate your VR knowledge and stay connected. Get the latest tutorials, case studies, and training for virtual reality, mixed reality, and augmented reality delivered monthly to your inbox. Sign up today!


How Funcom* Reinvented a Five-Year-Old MMO with Secret World Legends

Implementing the free-play model and putting a lot of thought into changes for the original game gave Funcom* the boost it needed to reintroduce the updated Secret World Legends.


Boss Key’s Lawbreakers: The Return of Cliff Bleszinski

Starting out when he was 15 years old, game developer superstar Cliff Bleszinski racked up impressive wins with the Unreal* series. He decided to retire early, but that didn’t last. Read the story of his return to the arena.


Get ready. Get noticed. Get big. Get news you can use by joining the Intel® Software Game Developer Program.