Channel: Intel Developer Zone Blogs

AI Calorie Counter: A Machine Learning App by Intel® Student Ambassador Pallab Paul


Over the summer, I was given an opportunity through the Intel® Student Ambassador Program for Artificial Intelligence (AI) to work on an Early Innovation Project called Face It, a mobile application that uses machine learning to help a user decide on a hairstyle. Using the knowledge I gained from that project, my partner, Roshni Shah, and I created a very similar application with a different use case during an internship at Rutgers Wireless Information Network Laboratory (WINLAB). Our project, the ‘AI Calorie Counter’, is a diet application that helps a user keep track of which foods he/she eats and how many total daily calories he/she consumes.

To start using the application, the user must first create an account, which is saved to a database. When creating the account, the user must enter various information, including age, activity level, current weight, and goal weight. This information is used to calculate the specific number of daily calories the user needs to consume in order to lose or gain weight, depending on his/her goal.
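The post does not specify the formula behind this calculation, so the sketch below assumes the widely used Mifflin-St Jeor equation scaled by an activity factor, with a fixed deficit or surplus applied toward the goal weight; all names and constants are illustrative, not the app's actual implementation:

```python
# Hypothetical sketch of the daily-calorie calculation described above.
# Assumes the Mifflin-St Jeor BMR equation and a ~500 kcal/day
# deficit/surplus; the actual app's formula is not given in the post.

ACTIVITY_FACTORS = {"sedentary": 1.2, "light": 1.375, "moderate": 1.55, "active": 1.725}

def daily_calorie_goal(age, weight_kg, height_cm, sex, activity, goal_weight_kg):
    """Return a daily calorie target based on the user's profile."""
    # Mifflin-St Jeor basal metabolic rate
    bmr = 10 * weight_kg + 6.25 * height_cm - 5 * age + (5 if sex == "male" else -161)
    maintenance = bmr * ACTIVITY_FACTORS[activity]
    # Apply a deficit or surplus depending on whether the user wants
    # to lose or gain weight relative to the goal
    if goal_weight_kg < weight_kg:
        return maintenance - 500
    if goal_weight_kg > weight_kg:
        return maintenance + 500
    return maintenance
```
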

After the user creates an account, he/she can access the rest of the application’s features, which include the food scanner and the food log. To use the food scanner, the user clicks the ‘camera’ button and holds a food item he/she is about to consume in front of the mobile device’s camera. The food scanner then recognizes the food item and displays the number of calories it contains. After seeing the calorie count, and depending on whether he/she still chooses to eat the item, the user can log the food item and its calorie amount into the food log feature of the application. On the food log screen, the user can view his/her total daily calories. Whenever a new food item is added, the item’s name and calorie count are recorded in the database, and the item’s calories are subtracted from the user’s total daily calories. The user’s leftover calories are displayed on the food log screen as well.
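The food-log bookkeeping described above can be sketched in a few lines of Python (hypothetical class and method names; the real app persists these records in its database):

```python
class FoodLog:
    """Tracks foods eaten and the user's remaining daily calories.

    Illustrative sketch of the food-log behavior described in the post,
    not the app's actual code.
    """
    def __init__(self, daily_goal):
        self.daily_goal = daily_goal
        self.entries = []          # (food name, calories) pairs

    def log_food(self, name, calories):
        # Record the item; its calories count against the daily budget
        self.entries.append((name, calories))

    @property
    def calories_consumed(self):
        return sum(cal for _, cal in self.entries)

    @property
    def calories_remaining(self):
        # Leftover calories shown on the food log screen
        return self.daily_goal - self.calories_consumed
```
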

Using our application, the user can not only detect and view how many calories are in a certain food item, but he/she can also keep track of his/her daily caloric intake and live a healthier life in general.

A key component of the application is the food recognition feature, which is implemented using machine learning. Specifically, a convolutional neural network (CNN) is used to complete this task. We chose a CNN because its architecture is the best fit for image recognition tasks. CNN architectures are inspired by biological processes and include variations of multilayer perceptrons designed to require minimal preprocessing. A CNN has multiple layers, each with a distinct function that helps recognize an image. These layers include a convolutional layer, pooling layer, ReLU layer, fully connected layer, and loss layer.

Source: https://www.mathworks.com/help/nnet/convolutional-neural-networks.html?requestedDomain=www.mathworks.com

- The convolutional layer acts as the core of any CNN. The network develops a 2-dimensional activation map that detects the presence of a feature at each spatial position, as determined by the layer’s parameters.

- The pooling layer acts as a form of down-sampling. Max pooling is the most common implementation of pooling. Max pooling is ideal when dealing with smaller data sets, which is why we chose to use it.

- The ReLU layer, or Rectified Linear Units layer, is a layer of neurons that applies an activation function to increase the nonlinear properties of the decision function and of the overall network, without affecting the receptive fields of the convolutional layer itself.

- The fully connected layer, which comes after several convolutional and max pooling layers, does the high-level reasoning in the neural network. Neurons in this layer have connections to all the activations in the previous layer. The activations of the fully connected layer are then computed by a matrix multiplication followed by a bias offset.

- The loss layer specifies how the network training penalizes the deviation between the predicted and true labels. We believe softmax loss is the best fit for our project, as it is ideal for detecting a single class in a set of mutually exclusive classes.
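To make the roles of these layers concrete, here is a minimal NumPy sketch of the core per-layer operations (an illustration only; the actual application would build these layers with a deep learning framework):

```python
import numpy as np

def relu(x):
    """ReLU layer: zero out negative activations."""
    return np.maximum(0, x)

def max_pool_2x2(x):
    """Pooling layer: 2x2 max pooling over an (H, W) feature map."""
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def fully_connected(x, weights, bias):
    """Fully connected layer: a matrix multiplication plus a bias offset."""
    return x @ weights + bias

def softmax(logits):
    """Front-end of the loss layer: probabilities over mutually
    exclusive classes (subtracting the max for numerical stability)."""
    e = np.exp(logits - logits.max())
    return e / e.sum()
```
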

For the dataset, approximately 50 images of each food item were collected to be passed through the CNN. These images show each food item in various sizes and orientations so that the network can recognize a new image of the food item no matter what angle it is presented at.
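The post does not detail how the size and orientation variety was produced; one common way to synthesize such variation from a small dataset is random rotation and flipping, sketched here with NumPy (a hypothetical helper, not the project's actual pipeline):

```python
import numpy as np

def augment(image, rng):
    """Produce a randomly rotated/flipped copy of an image array.

    Illustrative sketch: the post only says images were collected at
    various sizes and orientations; random rotations and flips are one
    common way to synthesize such variation from a small dataset.
    """
    image = np.rot90(image, k=rng.integers(0, 4))   # rotate by 0/90/180/270 degrees
    if rng.random() < 0.5:
        image = np.fliplr(image)                    # random horizontal flip
    return image
```
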

There are a lot of improvements to make on this application and we have a lot of future plans. One feature we would like to improve upon is the calorie detector. Currently the number of calories being displayed is only the average amount of calories for the given food item. This method is not very accurate because a large apple would obviously contain more calories than a small apple. One way to improve this issue is by detecting the volume of the presented food item. There is a method called ‘space carving’ that detects the volume of 3D objects that we would like to look into and possibly implement in the future. We would also like to increase the number of food items that the food scanner can recognize so that people can use this application for any meal, common or exotic. Currently, the CNN model is only trained on 15 common food items but expanding this list is definitely something we would like to do. The last major improvement we would like to possibly add is a fitness aspect to our application where the user will be recommended certain daily exercises that he/she can perform and where the user can keep track of how many calories are being lost from each exercise.

You can view more details about this project here, and you can view a video of the working application here. This project is open source, and if anyone is interested in playing around with the code or helping us implement one of the future improvements we would like to make, feel free to download the source code on GitHub here.


Develop and Test SAP HANA-based Applications on an Intel NUC Mini-PC



SAP is well-known as the originator of “enterprise resource planning” (ERP), the business process automation software that manages back office functions through a system of integrated applications. Another game-changing innovation from the enterprise software leader is the SAP* HANA* in-memory data platform. It provides a lightning-fast foundation for critical enterprise applications such as Finance, Sales & Distribution, CRM, Supply Chain Management, HR, Big Data Analytics, and more.


SAP HANA accelerates business processes, delivers more business intelligence, and simplifies IT environments because it processes transactions and analytics in-memory, on a single data copy, to deliver real-time insights from live data.  

With SAP HANA, developers can quickly prototype, validate, build, and deliver smart and modern applications using flexible tools in the cloud or on premise.  But production SAP HANA systems typically require high-end infrastructure to support mission-critical business applications, and IT departments may be reluctant to carve out shared space or provide costly, dedicated systems for use as application development playgrounds.  To address that, SAP Senior Director of Global Developer Relations Craig Cmehil started experimenting about a year ago with a portable, self-contained SAP HANA development system.

A portable, self-contained development system

Craig took a page out of history: since the late 1970s, PCs running high-level languages and interactive software tools have offered programmers untethered freedom. Developers use PC-based tools to develop and test enterprise applications without impacting production environments, producing higher-quality code faster than ever before.

Craig Cmehil, SAP Senior Director of Global Developer Relations

Craig applied that approach using SAP HANA, express edition—a modified version of the full SAP HANA product—on an Intel® NUC mini-PC.  

SAP HANA, express edition provides optimized in-memory SAP HANA capabilities for resource-constrained environments, while still offering a rich set of capabilities. Developers can install and use SAP HANA, express edition for free on systems with up to 32 GB of memory—a perfect fit for a small but powerful Intel NUC mini-PC decked out with lots of RAM and a speedy SSD.

Intel NUC devices are tiny, affordable PCs packed with high-end specs. For example, the Intel NUC Kit NUC6i7KYK (also known as “Skull Canyon”) is an 8.5-inch by 4.5-inch by 1-inch PC built with a quad-core Intel® Core™ i7 processor and Intel® Iris™ Pro graphics. With up to 32 GB of RAM and a lightning-fast, high-capacity SSD, you can use the mini-PC for serious gaming, for graphics-heavy content creation, or even for running heavy, enterprise-class workloads.

Intel NUC mini-PCs deliver the performance needed to support an SAP HANA in-memory database. The combination opens new doors for application developers who can now develop applications on an Intel NUC with SAP HANA, express edition and easily transfer them to Intel® Xeon® Platinum processor-based systems to run them in production.

Agile application creation and testing  

Craig discovered he could set up an isolated platform for agile application creation and testing that did not interfere with production systems or eat up IT resources. And because Intel NUC mini-PCs are small enough to fit in a coat pocket or backpack, Craig could work remotely, from nearly anywhere. 

For example, Craig has demonstrated an SAP HANA-based inventory management solution at the shops of small vendors who supply major manufacturers. The small shops can use Intel NUC-based SAP HANA, express edition “appliances” connected directly to manufacturer customers’ SAP systems, simplifying supply chain management for both parties. That use case can save the suppliers money while maintaining their on-premise control and security. “If it’s a small shop, they may not want to invest in data center- or cloud-based solutions,” Craig said. “It can take a lot of stress off the small, family supplier. All they have to do is enter the data.”  

Other scenarios that Craig has seen in the real-world include university research projects. For example, a university in the USA is currently using sensor technologies connected to the Intel NUC running SAP HANA, express edition. The portable systems connect to the SAP Cloud Platform which does Big Data analytics on the massive volume, variety and velocity of unstructured data to rapidly generate predictions and insights. “We’re looking at thousands of daily downloads of the free SAP HANA, express edition for uses like that,” Craig noted.

Another use case Craig found compelling was at a friend’s dental office. “Since I live in a small town in Germany, I know a lot of small business owners there,” Craig explained. “I’ve shown a dentist friend of mine a way to improve the diagnosis of mouth pain by using the image classification model from SAP* Leonardo that is accessible via SAP HANA, express edition on an Intel NUC, connected to her dental imaging system.” He showed her a new way to evaluate X-rays without spending the time and money to consult a specialist, and the portable, self-contained system works without risking patient confidentiality on the public cloud. According to Craig, medical doctors and other professional service providers who use imaging devices can also save time and money and improve customer confidentiality with that approach.

Looking to the future, Craig sees great possibilities with portable, self-contained SAP HANA systems using the massive memory capacity made possible by Intel® Persistent Memory (also known by its code-name, “Apache Pass”).  Intel recently demonstrated that technology with the SAP HANA in-memory platform at the SAP Sapphire conference in Orlando, where Intel’s Lisa Davis, Vice President of IT Transformation, presented the first public demo of the technology. Big, affordable, persistent memory will be a game-changer for SAP HANA users. More data in-memory equates to better, faster insights and more business velocity.  

Try SAP HANA, express edition on an Intel NUC mini-PC

If you’d like to try out this versatile development platform for yourself, download the free trial version of SAP HANA, express edition and the how-to guide with all the information you need to get started with SAP HANA, express edition on an Intel NUC mini-PC.

If you’re going to SAP TechEd 2017, September 25-29 in Las Vegas, stop by Intel booth #100 to see a demo of SAP HANA, express edition in action on the Intel NUC mini-PC.  

You can follow along with Craig as he continues his experiments with Intel NUC mini-PCs. And don’t forget to follow me and my growing #TechTim community on Twitter: @TimIntel.

About Tim Allen

Tim is a strategic marketing manager for Intel with responsibilities for cloud, big data, analytics, datacenter appliances, and RISC migration. Tim has 20+ years of industry experience, including work as a systems analyst, developer, system administrator, enterprise systems trainer, and marketing program manager. Prior to Intel, Tim worked at Tektronix, IBM, Intersolv, Sequent, and Con-Way Logistics. Tim holds a BSEE in computer engineering from BYU, PMP certification, and an MBA in finance from the University of Portland.

View all posts by Tim Allen

*Intel, the Intel logo, and Xeon are trademarks or registered trademarks of Intel Corporation.  Other names may be claimed as the property of others.

Global IoT DevFest Returns – Bigger & Better


Submit Papers & Register for Online Talks by ~100 Industry Experts Over 32 Hours of Developer Training and Mentoring

 

If you benefited from the first IoT online forum in June, sponsored by Intel and WiththeBest, there’s good news – Global IoT DevFest II picks up Nov. 7-8 with more of what you expect from a worldwide virtual event: more keynotes, more presenters, more demos, more 1:1 mentoring, and even more hours in a day.

Global IoT DevFest II will again provide a worldwide platform for industry thought leaders, innovators, developers, and enthusiasts to contribute their knowledge and visions, conduct deep-dive training, and highlight real-world use cases of IoT solutions in action – a total of 16 hours on each of two days.

Over the course of 32 hours – doubled from the June event to accommodate participants in various time zones – IoT developers of all experience levels will share their IoT journeys, teach and learn in a host of session topics, and grow their developer skills through 1:1 mentoring opportunities. Collectively, DevFest II participants will showcase cutting-edge research and innovation as they advance the state of IoT development.

 

Global IoT DevFest debuted last June with a premier online event that drew participants from 94 countries to hear 46 addresses from leading IoT experts. Like the June event, Intel hosts DevFest II at no charge to attendees. This learning opportunity is one more example of Intel investing in training, tools, and other resources to help committed IoT developers expand the Internet of Things.

 

Speaker candidates:  Submit your abstracts

Got a game-changing IoT application you’d like to showcase?  IoT experts and innovators who wish to be considered DevFest II speakers are invited to submit suggested topics by Sept. 27.  Presentations for DevFest will fall within four tracks:

Track 1: Developing IoT Solutions for a Connected, Smart, and Autonomous World 

This track introduces the three phases of IoT development: connecting the unconnected, creating smart and connected things, and building a software-defined autonomous world. It also covers the IoT ecosystem and the latest developments affecting it by highlighting emerging technologies, solutions and trends. Get a closer look at Intel’s array of developer tools, SDKs, technologies and other resources – an entire suite of ingredients that are available to help you build IoT solutions while unlocking new value across all the phases of IoT.

Track 2: Architecting, Integrating and Managing IoT Solutions 

An IoT platform must connect devices, collect and analyze data, meet scores of standards, scale to countless devices and messages, and be easy to manage. This track focuses on overcoming the challenges that IoT solution developers and system integrators typically face in the architecture, design, development, integration, optimization, and management of their IoT solutions. It also covers how to enable the full potential of IoT by addressing security and privacy challenges through a combination of education and good design. In the process of doing all these things, the IoT platform becomes a force for business transformation.

Track 3: Data Analytics and Artificial Intelligence 

The Internet of Things represents huge opportunities for developers as everyday items evolve beyond simple connectivity and become truly intelligent. Artificial intelligence is handling ever-greater amounts of data to improve the learning environment and increase the possibilities of what can be done with edge and cloud analytics. And it is combining multiple data streams to identify patterns and deliver more useful context than would otherwise be available. This track delves into the big promise of IoT, big data, AI, and related technologies to generate actionable insights for better decision making, integrate the physical and virtual worlds, and improve the human condition.

Track 4: Uncovering Real Business Opportunities from the Evolution of IoT 

This track moves IoT out of the lab and into the real world of practical use cases, disruptive new business models, and fresh ways of enabling customers to interact with their professional and personal environments. Organizations from small businesses to large enterprises are embracing IoT within their workflows -- from freight tracking and asset management to use cases applying smart video in a variety of industries like retail and manufacturing. Learn how the world’s most forward-thinking companies are finding IoT applications within their operations.

Apply to speak at the online Global IoT DevFest II, Nov. 7 and 8, during 32 hours of continuous developer training, sharing, and knowledge building.

Register now to join fellow IoT developers at the virtual Global IoT DevFest II as we explore all things IoT, sharing knowledge, tools and training to connect the unconnected, create smart and connected things, and build a software-defined autonomous world. 

MeshCentral2 - Cupcakes Update


Quick note to say that lots more updates and fixes are going on with MeshCentral2. Updates are coming pretty much every day. Yesterday, I added server self-update capability. You can now update the server with just a few clicks on the web site (if you are administrator). Hopefully, we are getting close to Beta2 that will be quite usable for day-to-day use.

Also, at work this morning, I had MeshCentral2 cupcakes. Figured I’d share a picture since they look so yummy!

Ylian
http://www.meshcommander.com/meshcentral2

Top Ten Intel Software Developer Stories | September



Winners of the 2017 Intel® Level Up Game Developer Contest

From puzzles to role playing, meet the top five winners in the five genres of the Intel® Level Up Game Developer Contest.



Profiling TensorFlow* Workloads with Intel® VTune™ Amplifier

We show you how to combine the data provided by the TensorFlow* library with options available in the Intel® VTune™ Amplifier to help optimize performance.



Intel® HPC Developer Conference: Get Enabled

Don’t miss Andres Rodriguez talk “Enabling the Future of Artificial Intelligence” at The Intel® HPC Developer Conference. It’s coming soon so register now!



VR Content Developer Guide

Get general guidelines for developing and designing your virtual reality (VR) application and learn how to obtain optimal performance using Intel® Core™ i7 processors.



Discover the UPM Sensor Library's New Website

UPM, a high-level sensor library for the Intel® IoT Platform, has a new website. Explore 400+ supported sensors, along with easy-to-find code samples, sensor specifications, datasheets, and more.



Use the Intel® SPMD Program Compiler for CPU Vectorization in Games

Learn how easy it is to migrate highly vectorized GPU compute kernels to vectorized CPU code by using the Intel® SPMD Program Compiler.



Unattended Baggage Detection Using Deep Neural Networks in Intel® Architecture

The need for sophisticated security is growing. We discuss an application for image classification using Microsoft Common Objects in Context*.



Software Innovators at SIGGRAPH

Take a look at the cutting-edge work three Intel® Software Innovators presented at SIGGRAPH.



Developing for Intel® Active Management Technology

Discover how Intel® Active Management Technology can make remote management of computers much easier.



Announcing the Intel Modern Code Developer Challenge from CERN* openlab

Follow the five exceptional students participating in the CERN* openlab Summer Student Programme who are working to research and develop solutions for five modern-code-centered challenges.


Intel® Developer Zone experts, Intel® Software Innovators, and Intel® Black Belt Software Developers contribute hundreds of helpful articles and blog posts every month. From code samples to how-to guides, we gather the most popular software developer stories in one place each month so you don’t miss a thing.  Miss last month?  Read it here. 


The Fab Five: Game Developer Content | September



Bah VR! Holograms Are the Future

Graphics processing without using a graphics card? Find out how Euclideon* is planning to make that happen.



Mount & Blade is a Much Bigger Deal than You Think

Armagan Yavuz, CEO of TaleWorlds Entertainment*, describes the fascinating journey that became the gaming phenomenon Mount & Blade.



Chasing the VR American Dream From Australia

Developed by an Australian indie team, what started as a joke turned into a game that's sure to be met with loud opinions from across the political spectrum.



One Man on a Mission

After building games for the military, Sorob Raissi headed out to create his own with Spread Shot Studios*.



First Animated TV Show Made in Unreal 4*

The first animated show made within Epic's Unreal 4*, Zafari celebrates diversity using special wild animal characters.


 

Cells in the Cloud: Distributed Runtime Prototype Implementation


Hello, everyone! In the previous part of this blog post series, we presented the nature of the simulations performed by the BioDynaMo project. Moreover, we observed how our desired requirements and constraints for the distributed runtime affected the design of the architecture and defined the choice of tools/frameworks.

Today, we will present some of the technical details of the distributed runtime prototype.

The Majordomo pattern

Majordomo is one of the reliable request-reply patterns described in the ZeroMQ guide. It provides a service-oriented queuing mechanism, where a set of clients can send messages to a set of workers. Each worker can register for one or more services to indicate that it can serve particular requests. An intermediate node called the broker (a.k.a. the master node in our case) is responsible for handling the messages from both clients and workers.

Majordomo implementation

Because the broker is the essential component of the architecture, it has some extra responsibilities. First, it manages the connections to workers and clients. For that purpose, it uses a single ROUTER socket (an asynchronous reply ZeroMQ socket), which can send and receive messages from multiple clients and workers. Thus, each client can send messages to a specific worker, using the worker’s unique identifier (i.e., service name) for routing. Using the same mechanism, the worker can send a reply back to the client.

In addition to the routing mechanism, the broker makes use of heartbeats to detect failures across workers. Every few seconds it sends messages with no payload to each worker. If the worker replies on time, it is still alive and capable of serving future requests. If it does not, the broker retries a few times, to rule out a transient network issue. If there is still no answer, the broker assumes that the worker node is offline and performs cleanup.
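The broker-side bookkeeping for this heartbeat scheme can be sketched, independently of ZeroMQ, as a small liveness tracker (the class name and the interval/retry constants below are illustrative, not the project's actual values):

```python
import time

HEARTBEAT_INTERVAL = 2.5   # seconds between pings (illustrative)
HEARTBEAT_RETRIES = 3      # missed replies tolerated before cleanup

class LivenessTracker:
    """Tracks worker liveness from heartbeat replies, as the broker does."""
    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.last_seen = {}            # worker id -> timestamp of last reply

    def register(self, worker_id):
        self.last_seen[worker_id] = self.clock()

    def heartbeat_received(self, worker_id):
        self.last_seen[worker_id] = self.clock()

    def expired_workers(self):
        """Workers that missed HEARTBEAT_RETRIES consecutive heartbeats."""
        deadline = HEARTBEAT_INTERVAL * HEARTBEAT_RETRIES
        now = self.clock()
        return [w for w, t in self.last_seen.items() if now - t > deadline]

    def cleanup(self, worker_id):
        # Broker assumes the worker is offline and forgets about it
        self.last_seen.pop(worker_id, None)
```
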

Modifying and extending the pattern

Because the sample implementation of the Majordomo pattern was written in C99, I first had to port the code to C++14 to work in conjunction with the existing codebase. To do so, I used the ZMQPP high-level bindings, which provide a nice abstraction over the raw ZeroMQ library functions.

Even though the Majordomo pattern is a solid foundation for what we are trying to build, we still have a couple of issues to address. Namely, the worker-to-worker communication and the creation of a middleware layer that exposes a communication interface to the application.

To implement the former, we use a DEALER-ROUTER socket pair between neighboring workers. By convention, the right worker creates a ROUTER socket and acts as a server (i.e., binds to an IP address), and the left worker creates a DEALER socket and acts as a client (i.e., connects to this IP address). This way, neighboring workers can exchange their halo regions (described in the previous blog post) asynchronously. We also properly exploit the power of ZeroMQ by using a single socket for both sending and receiving data, as well as a single network endpoint. Our new connection diagram is shown below:

New architecture

The next step is the implementation of the middleware layer. The goal of this layer is to abstract the low-level network communication details from the application itself (i.e., the simulation engine), while exposing a simple interface to the application.

Middleware network layer

Because we want to send and receive messages to and from other nodes (e.g., the broker and neighboring workers) during the simulation computations, we spawn a separate thread (the network thread) to deal with all the communication. Using the ZMQPP Reactor class, this thread initially waits until input is available from any of the registered socket objects (a.k.a. file descriptors). Thus, the network thread wakes up and handles node communication only when needed, without wasting valuable CPU cycles in the process. The reactor pattern is summarized in the following diagram, which is taken from here, where you can also find more information about the pattern itself and some of its extensions.

Reactor pattern

Using the reactor class, we first register the ZeroMQ sockets responsible for communicating with the broker and the workers. We also register a PAIR-PAIR ZeroMQ socket, to act as a pipe between the network thread and the application thread. This pipe is then used to signal the network thread when the application wants to send a message over the network. Note that we do not write the message to the pipe itself as this would be expensive when dealing with large messages (i.e., hundreds of megabytes); instead, we pass a unique_ptr (shared address space) to the message itself. Then, the network thread runs on a loop (Dispatcher) to handle the incoming requests. The ZMQPP Reactor class manages the selection (Synchronous Demultiplexer) and the execution of the correct Event Handler, internally.

To further simplify the implementation of the middleware layer, we define an interface (Event Handler) for the classes that handle the actual communication with the broker and the neighboring workers. This interface also encapsulates the ZeroMQ socket object (Handle) itself. Thus, we define a Communicator interface, with BrokerCommunicator and WorkerCommunicator as the concrete implementations (Concrete Event Handlers) of this interface. The reactor can now call the Handle method through the interface, ignoring all the internal details. Now we have a flexible, modular, and efficient prototype ready to be tested!
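A rough Python sketch of this handler structure (hypothetical class and method names; the actual implementation is C++14 using ZMQPP sockets and its Reactor class) might look like:

```python
from abc import ABC, abstractmethod

class Communicator(ABC):
    """Event-handler interface; each concrete class would wrap a ZeroMQ socket."""
    @abstractmethod
    def handle(self, message):
        """React to input available on this communicator's socket."""

class BrokerCommunicator(Communicator):
    """Handles traffic to/from the broker (e.g., heartbeats, client requests)."""
    def __init__(self):
        self.received = []
    def handle(self, message):
        self.received.append(("broker", message))

class WorkerCommunicator(Communicator):
    """Handles traffic with a neighboring worker (e.g., halo-region exchange)."""
    def __init__(self):
        self.received = []
    def handle(self, message):
        self.received.append(("worker", message))

class Reactor:
    """Minimal dispatcher: maps a readable endpoint to its event handler.

    In the real code, ZMQPP selects the ready socket internally; here we
    dispatch directly by endpoint name to illustrate the structure.
    """
    def __init__(self):
        self.handlers = {}
    def register(self, endpoint, communicator):
        self.handlers[endpoint] = communicator
    def dispatch(self, endpoint, message):
        self.handlers[endpoint].handle(message)
```
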

You can find the implementation of the distributed runtime prototype on GitHub.

Thanks for reading!

Virtual Platform Checkpointing @ SystemC* Evolution Day 2017

The SystemC* Evolution Day 2017 is happening in München next month (on October 18). Just like last year, it comes right after DVCon (Design and Verification Conference) Europe, and we expect to see many people interested in design, verification, and virtual platform work with SystemC attend. Several of my Intel colleagues and I will be running a session on virtual platform checkpointing and how we can bring it to the SystemC world.
We will combine our experience with Simics* and SystemC with the best minds in the SystemC world to get a better understanding of the problem and possible solutions. Way back in 2009, I actually put together a prototype solution for this with Marius Monton.

Checkpointing? 

Checkpointing and how it can help optimize workflows and make virtual platforms more useful is a topic dear to my heart (for example see my blogs on Checkpointing: Meaningless, Difficult, or just Overlooked?, and  Checkpointing in SystemC @ FDL). It can mean different things to different people, and one thing I want to bring to the discussion is a broad take on just what checkpointing is and what we want to use it for.
To me, checkpointing can be defined as follows: “the ability of a virtual platform or virtualization environment to save the state of an executing simulation to storage and later bring the saved state back and continue the simulation as if nothing had happened from the perspective of the simulated system.” To be really useful, we also need to add the qualification: “…by any users, on any machine, at any point in time.”
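In code terms, that contract amounts to serializing the complete simulation state and later restoring it in another process. A minimal Python sketch (illustrative only; this is not SystemC, and a real virtual platform carries far more state than a dictionary):

```python
import pickle

def save_checkpoint(path, state):
    """Persist the complete simulation state to storage."""
    with open(path, "wb") as f:
        pickle.dump(state, f)

def restore_checkpoint(path):
    """Bring the saved state back so the simulation can continue as if
    nothing had happened from the simulated system's perspective."""
    with open(path, "rb") as f:
        return pickle.load(f)
```

The hard part in a real virtual platform is not the serialization call but ensuring that every model exposes all of its state, which is exactly the kind of problem the session aims to discuss for SystemC.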
Given this, we can do a lot of interesting and useful things. Here is an infographic capturing the most common ones:
SystemC Evolution Checkpoint Poster margins
I have written a lot about various use cases in the past. A few of them are listed below:

See You in München

I will be there for both DVCon and the SystemC Evolution Day. 

 

Green Man Gaming and Intel Help PC Game Developers Grow



Launched in 2010 in London, Green Man Gaming is an eCommerce technology business and a video games publisher that helps independent development studios market their games globally. The online store and community offers the latest game insights and information, and more than 8,500 digital multi-platform games from 550 publishers to gamers in 195 countries.

Millions of gamers worldwide discuss, discover and share all things gaming within a highly engaged community at greenmangaming.com. This includes unique game data tracking, reviews, top Twitch streamer videos as well as expert insights available on Green Man Gaming’s game hubs, blog and newsroom. Individual and community gameplay data is available on the website including total hours played, full game library and game achievements. Green Man Gaming’s multi-platform game data tracking is a unique offering in the video games industry. 

The community includes more than 3,700 Twitch streamers and YouTube content creators known as the Green Team. They are important ambassadors among gamers worldwide and highly influential for Green Man Gaming and the games they sell. 

The company’s industry expertise and knowledge provides independent developers with hands-on support that includes in-depth market analysis, integrated marketing and PR campaigns, finance options and strategic global retail opportunities. Green Man Gaming Publishing has helped indie developers launch 21 games since 2014. Highly-rated games published by Green Man Gaming Publishing include The Black Death, Aporia and the BAFTA Cymru Award nominated game The Bunker.

Leveraging its award-winning, patented technology, Green Man Gaming also works with leading global technology brands such as Intel and premium hardware reseller partners to support marketing initiatives in the PC gaming market. Announced at CES 2017, the Green Man Gaming digital storefront is currently available on millions of Lenovo laptops worldwide through the Lenovo Entertainment Hub.  Green Man Gaming has also worked with Intel on marketing partnerships targeted at PC gamers, joint promotions with Intel reseller partners as well as Intel sponsored game hubs in partnership with game publishers featuring the biggest games in the market.  

“It is essential that we enhance the experience of our gamers on a direct community relationship level, but also from a business-to-business perspective,” said Green Man Gaming Founder & CEO Paul Sulyok. “Our Intel relationship has grown very strongly from experimental efforts where Intel helped promote developers in the marketplace to where we now work together to ease the friction, looking at B2B relationships with hardware manufacturers and resellers. The relationship is built on a common vision on where we’d like to go with our customers.”
Paul Sulyok, Founder & CEO, Green Man Gaming

Paul explains, “We began talking to Intel in 2013. Initially, it started as a marketing relationship to promote products to high-end enthusiasts who come to Green Man Gaming to get premium titles. Later, we started doing co-promotions with Intel partners, and then we began working with them on B2B initiatives that help game developers reach more gamers through PC promotional campaigns.”

When asked how he would sum up the nature of the Intel/Green Man Gaming relationship, Paul said, “We’ve had a great experience working with Intel over the last 18 months and look forward to growing it further. We share a common desire to help gamers improve their gaming experience and to support developers to attract new audiences to their games. As with all great relationships, it’s built on trust, it’s built on experience, and it’s built on the people who are working together.”

If you are a game developer interested in learning more about Green Man Gaming Publishing, visit https://www.greenmangaming.com/about-us/green-man-gaming-publishing/ If you would like to get your game sold on the Green Man Gaming store, please visit https://www.greenmangaming.com/publishers-and-developers/ 

Learn more about how game developers can work with Intel to get ready, get noticed and get big at https://software.intel.com/en-us/gamedev

#  #  #

© 2017, Intel Corporation.  Intel, the Intel logo and Core are trademarks or registered trademarks of Intel Corporation.  All other names may be claimed as the property of others.

Modernizing Software with Future-Proof Code Optimizations


by Henry A. Gabb, Sr. Principal Engineer, Intel Software and Services Group


Create High Performance, Scalable and Portable Parallel Code with New Intel® Parallel Studio XE 2018

Intel® Parallel Studio XE is our flagship product for software development, debugging, and tuning on Intel processor architectures for HPC, enterprise, and cloud computing. It is a comprehensive tool suite that contains everything from compilers and high-performance math libraries all the way to debuggers and profilers for large-scale cluster applications. These tools enable developers to exploit the full performance potential of Intel® processors. Intel Parallel Studio XE is designed to help developers create high performance, scalable, reliable parallel code—faster.

The latest release, Intel Parallel Studio XE 2018, contains many new and interesting features [1]. Let’s start with parallelism. It’s in the product name, after all. Software development and parallelism used to be separate concerns, and parallel computing was mainly confined to high-performance computing practitioners. Today, however, parallel architectures are ubiquitous. Multicore processors are now in everything from handheld devices to the world’s most powerful supercomputers.

The Intel® Compilers support the OpenMP* 4.5 standard for compiler-directed multithreading, plus initial support for the 5.0 draft. OpenMP is now 20 years old and continues to evolve with new hardware architectures [2, 3, 4]. The latest versions provide computation offload to accelerator devices, vectorization directives, enhanced control of thread placement, and much more [5]. For distributed-memory process-level parallelism, the Intel® MPI Library supports the latest message-passing interface (MPI) standard, and contains many optimizations for collective communication, job startup and shutdown, and support for the latest high-speed interconnects like the Intel® Omni-Path Architecture (Intel® OPA). Combining OpenMP and MPI in the same application has proven to be a powerful way to achieve scalable parallelism on modern clusters.

The number of cores per socket has steadily increased since the first multicore processor was released, but while higher-level parallelism is important, lower-level code tuning should not be ignored. In fact, parallelizing code that has not been properly tuned can be counterproductive. There are few things more disheartening than going through the effort of parallelizing an application only to find that vectorizing a few key loops gives better performance and renders the previous parallelization unnecessary. Vectors continue to get wider in modern processor architectures, so the Intel compilers contain many new enhancements to enable efficient vectorization [6]. In addition to the OpenMP vectorization directives mentioned above, the Intel compilers exploit the latest Intel® Advanced Vector Extensions (Intel® AVX-512) instructions in Intel® Xeon® Scalable and Xeon Phi™ processor architectures [7].

The compilers in Intel Parallel Studio XE 2018 support the latest Fortran, C, and C++ standards. More recently, the Intel® Distribution for Python* was added to the suite. Our optimized Python distribution integrates the Intel® Performance Libraries into many Python packages (e.g., NumPy, SciPy, scikit-learn, mpi4py). (Other productivity languages like Julia* [8] and R* [9, 10], which are not part of the product, can also take advantage of the Intel performance libraries.) Intel Parallel Studio XE 2018 also includes the following highly-optimized libraries: Intel® Math Kernel Library (Intel® MKL), Intel® Integrated Performance Primitives (Intel® IPP), the Intel® Data Analytics Acceleration Library (Intel® DAAL), the Intel® MPI Library, and the Intel® Threading Building Blocks (Intel TBB). Intel® MKL provides tuned, parallel math functions for dense and sparse linear algebra, Fourier transforms, neural networks, random number generation, basic statistics, etc. The latest version contains new APIs to improve the performance of the bulk matrix multiplication and convolution required during neural network training. Common computations in image processing, computer vision, signal processing, compression/decompression, cryptography, and string processing are available in Intel® IPP [11]. The newest library in the suite, Intel® DAAL, supports basic statistics and machine learning (e.g., dimensionality reduction, anomaly detection, classification, regression, clustering) [9, 12, 13, 14].

For C++ programmers, Intel continues to support Intel® TBB (www.threadingbuildingblocks.org), the widely-used template library for task parallelism [15]. (Note that in spite of the name, Intel TBB is open-sourced under an Apache 2.0 license. Intel has always preferred open, vendor-neutral standards over proprietary programming models.) Intel TBB fully leverages multicore processors, but its most exciting new feature is the flow graph coordination layer. Flow graph allows the programmer to describe complex workflows from which the Intel TBB runtime extracts parallelism. Intel TBB flow graph could become the preferred parallel programming model for heterogeneous processor environments. Intel Parallel Studio XE 2018 contains a preview feature in Intel Advisor called Flow Graph Analyzer to help create and optimize flow graphs [16].

In addition to compilers and performance libraries, Intel Parallel Studio XE 2018 contains powerful code analysis tools to assist with debugging and tuning at instruction-, thread-, and process-level parallelism. Intel® Inspector is a one-of-a-kind debugger that not only finds garden-variety bugs like memory leaks but also performs correctness checking on threaded code to identify data races, potential deadlocks, and other non-deterministic concurrency errors. Intel® VTune™ Amplifier provides basic profiling to find performance hotspots, but it does much more: microarchitecture analysis, memory and I/O analysis, and so on. Its latest release adds support for profiling applications running in containers, and the new Application Performance Snapshot feature provides a one-page overview of an application’s efficiency and performance characteristics across MPI, CPU, FPU, and memory use. Intel® Advisor, another one-of-a-kind tool, allows users to quickly prototype regions for potential parallelism and project likely speedup. Its most exciting new feature is cache-aware roofline analysis, which pinpoints underperforming loops, graphically shows which are good candidates for code tuning, and gives advice about the likely performance bottlenecks [6, 17]. The Intel® Trace Analyzer and Collector performs correctness checking and communication profiling of MPI applications. Its latest version now supports OpenSHMEM (www.openshmem.org), an open standard API for parallelism in a partitioned global address space (PGAS). PGAS could become an important programming model for future parallel systems. Finally, Intel® Cluster Checker, a tool for analyzing cluster health, added new features to improve usability and diagnostic output, check the Intel® Omni-Path Architecture (Intel® OPA), and much more [18].

Few Intel Parallel Studio XE users realize how much this tool suite has evolved, how mature some of its components really are (20+ years), and how it has driven new approaches and helped developers accelerate parallel programming performance significantly over the last decade. However, its design goal has remained the same: to enable future-proof code modernization. For example, the same cache optimization techniques (e.g., blocking and tiling) that were beneficial 20 years ago are still beneficial. Today, however, code modernization is about exploiting parallelism, starting with vectorization (instruction-level parallelism), then threading, and finally message-passing on distributed-memory clusters. What does the future hold? Heterogeneous parallelism, PGAS languages, persistent memory? Whatever the future holds, Intel Parallel Studio XE will evolve accordingly.

More Resources

References

The following articles were published in recent issues of The Parallel Universe. Get future issues: Subscribe Today

  1. Jackson Marusarz “Modernize your code for performance, portability, and scalability: What’s new in Intel Parallel Studio XE 2018” The Parallel Universe #30.

  2. Rob Farber “Happy 20th Birthday, OpenMP: Making parallel programming accessible to C/C++ and Fortran programmers – and providing a software path to exascale computation” The Parallel Universe #28.
  3. Bronis R. de Supinski “OpenMP is turning 20! Making parallel programming accessible to C/C++ and Fortran programmers” The Parallel Universe #29.
  4. Barbara Chapman “Welcome to the adult world, OpenMP: After 20 years, it’s more relevant than ever” The Parallel Universe #30.
  5. Michael Klemm et al. “The present and future of the OpenMP API specification: How the gold standard parallel programming language has improved with each new version” The Parallel Universe #27.
  6. Robert H. Dodds “Vectorization becomes important – again: Open source code WARP3D exemplifies renewed interest in vectorization” The Parallel Universe #29.
  7. Martyn Corden “Vectorization opportunities for improved performance with Intel AVX-512: Examples of how Intel compilers can vectorize and speed up loops” The Parallel Universe #27.
  8. Ranjan Anantharaman et al. “Julia: A high-level language for supercomputing. The Julia Project continues to break new boundaries in scientific computing” The Parallel Universe #29.
  9. Steena Monteiro and Shaojuan Zhu “Accelerating linear regression in R with Intel DAAL: Make better predictions with this highly optimized open source package” The Parallel Universe #29.
  10. Drew Schmidt “HPC with R: The basics” The Parallel Universe #28.
  11. Chao Yu and Sergey Khlystov “Building fast data compression code for cloud and edge applications: How to optimize your compression with Intel Integrated Performance Primitives” The Parallel Universe #29.
  12. Vadim Pirogov et al. “Unleash the power of big data analytics and machine learning: How Intel performance libraries make it happen” The Parallel Universe #26.
  13. Oleg Kremnyov et al. “Solving real-world machine learning problems with Intel Data Analytics Acceleration Library” The Parallel Universe #28.
  14. Oleg Kremnyov et al. “Dealing with outliers: How to find fraudulent transactions in a real-world dataset” The Parallel Universe #30.
  15. “Intel Threading Building Blocks celebrates 10 years!” The Parallel Universe, Special Edition.
  16. Vasanth Tovinkere et al. “Driving code performance with Intel Advisor Flow Graph Analyzer: Optimizing performance for an autonomous driving application” The Parallel Universe #30.
  17. Kevin O’Leary et al. “Intel Advisor Roofline Analysis: A new way to visualize performance trade-offs” The Parallel Universe #27.
  18. Brock A. Taylor “Is your cluster healthy? Must-have cluster diagnostics in Intel Cluster Checker” The Parallel Universe #30.

About the Author

Henry A. Gabb, Senior Principal Engineer at Intel Corporation, is a longtime high-performance and parallel computing practitioner. He has published numerous articles on parallel programming, computational life science, and cheminformatics. In case you couldn’t tell from the reference list, Henry is the editor of The Parallel Universe, Intel’s quarterly magazine devoted to software innovation. He was also editor and coauthor of Developing Multithreaded Applications: A Platform Consistent Approach and was the program manager of the Intel/Microsoft Universal Parallel Computing Research Centers.

Pre-Processing GeoTIFF files and training DeepMask/SharpMask model


For this project, UNOSAT is responsible for providing us with satellite images. At first, we will be using GeoTIFF files of Muna refugee camps, Nigeria. You can find the map analyzed by the UN on this link. Other than these images we were also provided with shapefile and geodatabase files.

We can handle images and data within a single file using a flexible and adaptable file format known as TIFF. Using TIFF, we can define the geometry of the image through header tags such as size, definition, image-data arrangement and applied image compression. GeoTIFF is a type of TIFF file: a public-domain metadata standard that allows the internal coordinate system of a map to be related to a ground system of geographic coordinates (e.g., latitude and longitude) embedded within the TIFF file itself.

I am using the shapefile to extract label information for my dataset. The shapefile is a popular geospatial vector data format for geographic information system (GIS) software. Using a shapefile, we can spatially describe vector features: points, lines and polygons can represent shelters, rivers or other features.

With these inputs, we want to generate annotations/labels for our dataset that follow the DeepMask annotation format, which I have already shown in the second blog. First, we need to generate tiles from the given GeoTIFF files, and then we will create labels for each tile.

Tile Merging:

To generate tiles, I was provided with seven GeoTIFF files of the refugee camp. The first step was to merge these satellite images into one. To do this we can follow any of the approaches given below:

Using QGIS:

You can install QGIS by following the steps given below:

sudo add-apt-repository ppa:ubuntugis/ubuntugis-unstable
sudo apt-get update
sudo apt-get install qgis

After installation, you need to follow this simple tutorial.

Using GDAL python library:

You can install GDAL by following the steps mentioned below:

sudo add-apt-repository ppa:ubuntugis/ppa && sudo apt-get update
sudo apt-get install gdal-bin

After installation, you can use the following command:

gdal_merge.py -ot Byte -o mergedMuna.tif -of GTiff Muna1.tif Muna2.tif Muna3.tif Muna4.tif Muna5.tif Muna6.tif Muna7.tif -co COMPRESS=DEFLATE -co PREDICTOR=2 -co BIGTIFF=YES

I recommend the GDAL approach, as it produces the desired merged TIFF most efficiently.

Python shapefile library:

The Python Shapefile Library (pyshp) provides read and write support for the shapefile format and can be installed using the following command:

pip install pyshp

After installation, to parse the given shapefile follow the steps mentioned below:

First, you have to import the library

import shapefile

For reading the shapefile create a “Reader” object and pass it the name of your existing shapefile. The shapefile format consists of three files. You can simply specify the filename with the .shp extension.

sf = shapefile.Reader("Muna_Structures.shp")

To get the list of geometries in the shapefile, use the shapes() method. The shapes method returns a list of shape objects, where each shape object describes the geometry of one shape record.

shapes = sf.shapes()

To find the number of shape objects in the given shapefile, we can simply use the len() function.

len(shapes)

To iterate through the shapefile’s geometry you can use the iterShapes() method.

len(list(sf.iterShapes()))

Each shape record can consist of the following attributes:

'bbox', 'parts', 'points', 'shapeType'

shapeType: returns an integer which defines the type of the shape.

shapes[1].shapeType

Types of shapes with their respective integer values are given below:

shapefile.NULL = 0
shapefile.POINT = 1
shapefile.POLYLINE = 3
shapefile.POLYGON = 5
shapefile.MULTIPOINT = 8
shapefile.POINTZ = 11
shapefile.POLYLINEZ = 13
shapefile.POLYGONZ = 15
shapefile.MULTIPOINTZ = 18
shapefile.POINTM = 21
shapefile.POLYLINEM = 23
shapefile.POLYGONM = 25
shapefile.MULTIPOINTM = 28
shapefile.MULTIPATCH = 31

bbox: If the shape type contains multiple points, this tuple describes the lower-left (x,y) coordinate and the upper-right corner coordinate. You can use this information to get a complete box around the points.

If shapeType == 0, an AttributeError is raised.

bbox = shapes[1].bbox

parts: This attribute groups collections of points into shapes. If the shape record has more than one part, this attribute contains the index of the first point of each part. If there is only one part, a list containing 0 is returned.

shapes[1].parts

points: Returns a list containing an (x,y) coordinate for each point in the shape.

len(shapes[1].points)

You can read a single shape by calling its index using the shape() method. The shape index starts from 0, so to read the 5th shape record you would use an index of 4.

s = sf.shape(4)

Simple Visualization:

To understand how the shape objects in the shapefile represent shelters and other features, you can visualize them overlaid on the merged TIFF in QGIS.
To do this, add the raster layer from the menu bar and select the merged image, then drag and drop the shapefile. This results in all shape objects pointing at their respective shelters, as in the images given below:

[Images: shelters from the refugee camp; manually labeled refugee camp]
Tile Generation:

Now we have to split our merged image into tiles of the specific size required by the DeepMask/SharpMask model. We are generating tiles of size 224 x 224. To do this, you can follow one of the approaches below:

ImageMagick method:
convert mergedImage.tif -crop 224x224 +repage +adjoin muna_%d.jpg
Sliding Window Method:

I created a Python script to generate tiles by sliding a window of size 224 x 224 over the merged image, moving 50 pixels at a time in the x direction and 50 pixels in the y direction. This produces a significantly higher number of (overlapping) tiles than the ImageMagick method.
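As a rough illustration, the sliding-window tiler can be sketched as follows (a minimal sketch, not the actual script; the function and variable names are hypothetical):

```python
def tile_origins(width, height, tile=224, stride=50):
    """Yield the (x, y) upper-left corner of every tile x tile window
    that fits fully inside a width x height image, sliding by stride pixels."""
    for y in range(0, height - tile + 1, stride):
        for x in range(0, width - tile + 1, stride):
            yield (x, y)

# Example: tile origins for a hypothetical 1000 x 500 merged image
origins = list(tile_origins(1000, 500))
```

Each origin can then be handed to any raster library (GDAL, Pillow, etc.) to crop the actual 224 x 224 tile from the merged image.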

Now I have a sufficient number of tiles, with their geometry information present in the shapefile.

Problem (DeepMask annotation format requires Bounding Box):

The DeepMask annotation format, as shown in the second blog, does not work with lone points; it requires bounding box information, so I need to modify my shapefile and convert points to polygons.

I can modify my shapefile using QGIS. I can create a radius of influence around each point by following the steps mentioned below:

Start by buffering your points:

Go to the menu bar

Vector -> Geoprocessing Tools -> Fixed Distance Buffer -> Set Distance

I am setting the distance to 0.00002.

This gives me the radius of influence around each point, so I now have bounding box information for each shape. The resulting polygons are shown below:

[Images: shapefile with buffered points as polygons]
I can also perform the above procedure without modifying my shapefile by creating a python script.

In our shapefile, each shape consists of just a single point with no bounding box for the shelter. I can take this single point as the center and then define my radius empirically.

We can get x and y points around any given point with an empirically defined radius by doing the following steps:

for i in range(0, 360):
    x = math.floor(radius * math.cos(i * (math.pi / 180)) + XofPoint)
    y = math.floor(radius * math.sin(i * (math.pi / 180)) + YofPoint)
    segmentation[0].append(x)
    segmentation[0].append(y)

After converting a single point to multiple points, I can now add these points in the annotation file as segmentation pixels. Further, now I have bounding box pixels available too.
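Assuming the flat [x1, y1, x2, y2, ...] layout used by the annotation format, the bounding box can be derived directly from the segmentation points; a small sketch (the helper name is hypothetical):

```python
def bbox_from_segmentation(segmentation):
    """Derive a COCO-style [x, y, width, height] bounding box
    from a flat [x1, y1, x2, y2, ...] segmentation list."""
    xs = segmentation[0::2]  # every even index is an x coordinate
    ys = segmentation[1::2]  # every odd index is a y coordinate
    return [min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys)]
```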

Problem: (All circles will be of equal size)

The radius of influence around every point covers the same area, no matter how big or small the shelter structure is. So for small shelters it will include a considerable amount of background region, and for bigger shelters some of the structure may lie outside the boundary of the radius. So, learning problems!

As seen in the below example:

[Image: shapefile with multiple points]

So, the next thing that I can do is to manually define the labels for each shelter. To do this you have to create your own shapefile by doing the following steps:

Choose New → New Shapefile Layer from the Layer menu.

Select polygon as type.

Choose the same CRS (coordinate reference system) as the given shapefile; in our case it is WGS84.

Under New Field, type the name of the new field (in my case, shelters), click [Add to fields list] and then click [OK]. You will be prompted by the Save As dialog. Type the file name ("myShapeFile") and click [Save]. You will be able to see the new layer in the Layers list.

Now click on the name of the layer you have just created in the Layers Panel, and click the Toggle Editing option on the toolbar. Our layer is now editable. Next, click on the Add Feature icon; the cursor will change from an arrowhead to a different icon. Left-click on your first shelter to create the first point of your new shape, and keep left-clicking for each additional point you wish to include in your polygon. When you have finished adding points (i.e., the polygon represents the desired shelter area), right-click anywhere on the map area to confirm that you have finished entering the geometry of your feature. This opens the attribute window. Input the name you want for your polygon (e.g., "shelter1") and click [OK].

Then click on Save Layer Edits (either on the toolbar or under the Layer menu). Note that saving the project while you are editing your layer does not save unsaved layer edits, so you should explicitly save the edits to the layer or all your effort will be lost.

Comparison of modified shapefile with the older shapefile is given below:

[Images: original shapefile with multiple points; manually defined shapefile with multiple points]
At this point, I have an appropriate shapefile and a sufficient number of tiles. The next step is to create an annotation/label file for our dataset.

Annotation File Creation:

To create the annotation/label file for our dataset, you can follow the steps mentioned below:

Split the shape objects from the shapefile into three parts: training, validation and testing.

Load the merged image using the geoio library. You can do this in the following way:

mergedImage = 'merged.tif'
img = geoio.GeoImage(mergedImage)

For training, I am parsing 1803 of the 2403 shapes. As we have generated our tiles using an overlapping approach, each shape lies in several tiles.

Our shapefile was created using the WGS84 coordinate reference system, so each shape point is represented by longitude and latitude. To convert longitude and latitude to pixels (rasterization) we can use the geoio library. Since we have already loaded our merged image into the geoio img object, we can do this:

img.proj_to_raster(shapeLongitude, shapeLatitude)

After converting geo coordinates to pixels in the shapefile, we can get our labels for each shelter using these points, but the points are relative to the merged image. We need to localize them to each tile.

To do this, we first have to find all tiles in which a given point is present. After identifying the tiles, we need to find where in the merged image the upper-left corner, or origin (0,0), of each of those tiles lies.

After finding the upper-left corner pixels for the tiles in the merged image, we can simply subtract them from the corresponding shape points present in those tiles to get the local coordinates of each shape point.
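The two localization steps above can be sketched in plain Python. This is a hedged sketch: it assumes a north-up GDAL-style geotransform (the kind of conversion geoio's proj_to_raster performs internally), and the function names are hypothetical:

```python
def geo_to_pixel(lon, lat, geotransform):
    """Invert a north-up GDAL-style geotransform
    (origin_x, pixel_width, 0, origin_y, 0, -pixel_height)
    to get (column, row) in the merged image."""
    origin_x, pixel_w, _, origin_y, _, pixel_h = geotransform
    col = int((lon - origin_x) / pixel_w)
    row = int((lat - origin_y) / pixel_h)  # pixel_h is negative for north-up
    return col, row

def tiles_containing(col, row, tile=224, stride=50):
    """Return (origin_x, origin_y, local_x, local_y) for every
    overlapping tile that contains the pixel (col, row)."""
    hits = []
    # Smallest tile origins (multiples of stride) that still cover the point
    ox0 = max(0, (col - tile) // stride + 1) * stride
    oy0 = max(0, (row - tile) // stride + 1) * stride
    for ox in range(ox0, col + 1, stride):
        for oy in range(oy0, row + 1, stride):
            hits.append((ox, oy, col - ox, row - oy))
    return hits
```

tiles_containing returns each tile origin together with the tile-local coordinates, which is exactly the subtraction described above.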

Now we can simply include these points in the segmentation key of the data header in the JSON file.
Similarly, we can add the bounding box information for each shape under the bbox key of the data header.

Add the appropriate information to all other keys of the data header.

After each iteration we append the new record to the jsonData object.

After all the iterations are completed we need to dump/write jsonData object in json file:

d = jsonData
d = json.loads(d)
with open('instances_train2014.json','w') as outfile:
    json.dump(d, outfile)

Similarly, we have to produce a JSON file for the validation dataset.

To evaluate the validity of generated labels, I parsed my JSON file and checked for various tiles to see how my labels look.

We can do json file parsing in the following way:

import json
d = open('instances_train2014.json').read()
d = json.loads(d)
d.keys()
d["any key name"]
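A quick sanity check one might run over the parsed annotations, assuming COCO-style records with an "id" and an [x, y, width, height] "bbox" (the helper name is hypothetical):

```python
def check_annotations(annotations, tile=224):
    """Return the ids of annotations whose bbox [x, y, w, h]
    does not fit inside a tile x tile image."""
    bad = []
    for ann in annotations:
        x, y, w, h = ann["bbox"]
        if x < 0 or y < 0 or x + w > tile or y + h > tile:
            bad.append(ann.get("id"))
    return bad
```

Any ids returned here point at labels whose local coordinates were computed incorrectly.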

Two tiles with labels overlaid can be seen in the images shown below:

[Images: labels overlaid on shelters in two tiles]
After following the directory structure explained in the second blog, we can start training with the pretrained DeepMask model on our own dataset using the command given below:

luajit train.lua -dm /path/to/pretrained/deepmask

If you want to use the pretrained SharpMask model, or to resume training from any checkpointed model, you can do so with the following command:

luajit train.lua -reload /data/mabubakr/mlMasks/pretrained/sharpmask

This command will lead to an error about a required class not being found. To resolve this, add the following line of code before line 63:

trainSm = true

After 3 days of training and more than 350 iterations with a learning rate of 1e-5, I am getting the results shown in the figures below.

To check results run the following command:

luajit computeProposals.lua /path/to/evaluate/own/pretrained/model

After 200 iterations these are the evaluation results that I am getting:

[Images: shelters after 200 iterations]
After 350 iterations I am getting the following evaluation results:                      

[Images: shelters after 350 iterations]
In my opinion, DeepMask/SharpMask did not work well with satellite images; at later stages of training, the evaluation results do not show any labels at all. In the future, we are going to evaluate the Generative Adversarial Network-based pix2pix algorithm.

Finally, I would like to thank my regular supervisor from CERN OpenLab Taghi Aliyev, my supervisor from UNOSAT Lars Bromley and my supervisor from crowdAI S.P. Mohanty for all of their help and guidance.


Intel® Developer Mesh: Editor’s Picks September 2017


Every month I pick out 5 projects from Intel® Developer Mesh that I find interesting and share them with you. There is a diverse array of projects on the site, so narrowing it down to just five can be difficult! I hope you’ll take a few minutes to find out why each of these projects caught my eye and then hop over to mesh to see what other projects interest you.

Apocalypse Rider

“Apocalypse” and “fun” don’t really seem like two things that should go together. But, in actuality, it’s a whole lot easier to deal with made-up apocalyptic scenarios than it is to deal with real life disasters. I suppose that’s why post-apocalyptic or dystopian themes are so popular; you can picture yourself as the hero. I know I personally love the genre, so when I saw Pedro Kayatt’s Apocalypse Rider VR game I had to take a closer look. In this game you ride a motorcycle at high speeds, avoiding hostile traffic, as you make your way through 20 levels of scorched wasteland. And best of all – Pedro says you won’t get VR sickness when playing this game.

Anomaly Detection Using Generative Adversarial Networks

In my opinion, the more proactive and preventative you can be in terms of your own health the better off you’ll be. With this project, Prajjwal Bhargava, hopes to obtain models that capture anomalies relevant for disease progression and treatment monitoring. By using deep convolutional generative adversarial networks (DCGAN) to learn the range of normal anatomical variability, he believes we can achieve high accuracy in anomaly detection. And using medical imaging will enable the observation of markers correlating with disease status and treatment response.

One Wheeled Balancing Robot

Some of the best videos online are of people falling over or running into things. They are hilarious to watch; and this project by Siddharth Nayak looks like it will be just as entertaining. I know I would love to watch a one-wheeled balancing robot learn to balance itself using reinforcement learning. I hope Siddharth gets video of the process to share with us so we can watch as this robot trains itself to be the best unicycle robot ever.

2D Mapping Using SONAR

I’ve seen a lot of projects lately that use camera systems to detect an object’s surroundings. This project caught my eye because instead of visual cues it uses ultrasound sensors to autonomously navigate its environment. Avirup Basu uses a robot vehicle with three ultrasound sensors that record data and send it to an application, which processes the data to map the path traversed and essentially draw out a 2D occupancy matrix of the path followed by the robot and its environment.

VeggieBox

There are so many options on the market that will send you a box of ready-to-make meals all portioned out and ready to cook. Some would say too many (and too much packaging). What I like about VeggieBox is that it is a home appliance that can examine the ingredients you have and then suggest the best recipes for you to create a tasty, healthy, home-cooked meal. Vu Pham plans to use the Movidius Neural Compute Stick to leverage deep learning on this low-powered home device. This seems like a no-brainer as it helps you to use up your food in a healthy way and avoid more food waste.

Become a Member

Interested in getting your project featured? Join us at Intel® Developer Mesh today and become a member of our amazing community of developers.

If you want to know more about Intel® Developer Mesh or the Intel® Software Innovator Program, contact Wendy Boswell.

The Best of Modern Code | September


Andres Rodriguez

Intel® HPC Developer Conference: Get Enabled

Don’t miss Andres Rodriguez's talk, “Enabling the Future of Artificial Intelligence,” at the conference. It’s coming soon, so register now.


Applications for Latency

Optimize Computer Applications for Latency

For applications such as high-frequency trading (HFT), search engines, and telecommunications, it is essential to minimize latency. This article shows how latency can be measured and tuned in the application.


IoT in LHC

Integrate IoT Devices in the Large Hadron Collider 

Student Lamija Tupo with CERN openlab describes how she selected technology to use for the project.


Fast Simulation

Deep Learning for Fast Simulation

Learn how to create a generative adversarial networks (GANs) model to simulate the passage of a particle through matter.


Fast Computation

Fast Computation of Adler-32 Checksums

Learn how to use the vector-processing capabilities of Intel® Architecture Processors to efficiently compute the Adler-32, a common checksum used for checking data integrity.


How To Get Started in VR with Unreal Engine


Whether you are an avid game developer or curious about developing in VR for the first time, you'll want to take a look at Unreal* Engine from Epic Games. Unreal Engine is a free development platform for creating 3D applications, including games and VR experiences. What is fairly unique about Unreal Engine is the VR Mode in its editor. This mode allows you to create and configure your VR scene from within VR. The short video below and the content in this post walk you through how to do this for the HTC* Vive*.

Getting Started: The above video walks you through each of the following steps:

  1. Download and Install Unreal Engine
  2. Launch the Unreal Editor and select "New Project". From there, select the VR Template option. This option will automatically add in most of the features you will need.
  3. The default scene points you to a choice of directories for the right levels to load. Using the directory panel at the bottom of the default Unreal Editor layout, navigate to: "VirtualRealityBP/Maps/MotionControllerMap". This level gives you a space to work with, with all the physics, locomotion, and controller features you need to start.
  4. In the bottom bar, select Play, then VR Preview mode to test this level in VR. You should see that your controllers look like left and right robot hands. Using the track pad on either hand you can teleport for locomotion, and you can go up to objects and grab and throw them by using the trigger. Notice your hands will animate open and closed with the pull of the triggers on the controllers.
  5. Now hit escape on your keyboard and pull up your HMD (not necessarily in that order). In the editor button menu you can select the VR Mode feature to edit this level directly from VR.  Try the following in this mode.
    1. Select the menu button on the left Vive controller to bring up a UI of options on your left controller.
    2. Use your right controller to point, grab, move, resize, and rotate items in the scene.
    3. Using the grip buttons, move, rotate, and scale the entire level to position yourself by the collection of blue cubes. Notice that the scale number allows you to scale back to 1.0 so you know the proper scale of the space.
    4. On your left controller, select the Windows option and then Details to open the Details panel in VR.
    5. Use your right controller to point at the bottom handle bar of the Details window and move it where you'd like.
    6. Using your right controller, point at one of the blue cubes to select it; then, in the Details window under the Static Mesh area, select Sphere to change the cube to a sphere.
    7. Under the Materials menu, select a different material.
    8. Do this for another set of objects and place them on the floor apart from each other. You can use the Snap To Ground feature on your left controller to snap any object to the ground. You can also point both controllers at an object in order to size and rotate that object.
    9. Finally, to test playing with your edited level, point at the left controller, select Tools, and then Play. While in this mode, grab and roll your sphere and try to knock over or hit the other objects you placed on the floor.

There you go. That is all you need to get started creating a VR experience in Unreal Engine. If you are interested in continuing with Unreal, I suggest you learn about its visual scripting system called "blueprints." It is a very powerful and highly scalable method for adding complex logic and realistic graphics to your VR experience. Learn more here: https://docs.unrealengine.com/latest/INT/Engine/Blueprints/

Intel® HPC Developer Conference: For the HPC Practitioner


Are you going to SC17?

I find myself asking that question a lot lately. If you’re attending SC17, head to Denver a few days early so you won’t miss the Intel® HPC Developer Conference 2017. Even if you’re not going to SC17, the Intel HPC Developer Conference 2017 alone is worth the trip—as past attendees have told us. Registration is free, but don’t be fooled by the price. A lot of valuable information is packed into two days. Interested in the latest parallel programming models? Got it. Artificial intelligence? Got it. Achieving high performance with productivity languages? Using containers in HPC? Deploying HPC applications in the cloud? Instruction, thread, or cluster-level parallelism (though if you’re serious about HPC, you’re focused on all three)? This conference has it all.

The Intel HPC Developer Conference 2017 aims to deliver practical, hands-on advice that attendees can apply to their development efforts. This was the main selection criterion for the many submissions that we received for this year’s conference. Theoretical discussion is kept to a minimum, putting greater emphasis on intermediate- and advanced-level real-world results and examples of interest to experienced HPC practitioners. Here are just a few of the topics that will be covered this year:

  • Scalable deep learning and data analytics
  • Harnessing HPC with R
  • FPGA programming
  • Heterogeneous parallelism
  • Software optimization for the latest Intel hardware

There will be over 100 sessions this year, including technical lectures, hands-on tutorials, and posters. The Intel HPC Developer Conference 2017 will be at the Sheraton Denver Downtown Hotel in Denver, Colorado on November 11-12.

Register Today! 

About the Author

Henry A. Gabb, Senior Principal Engineer at Intel Corporation, is a longtime high-performance and parallel computing practitioner. He has published numerous articles on parallel programming, computational life science, and cheminformatics. Henry is the editor of The Parallel Universe, Intel’s quarterly magazine devoted to software innovation. He was also editor and coauthor of Developing Multithreaded Applications: A Platform Consistent Approach and was the program manager of the Intel/Microsoft Universal Parallel Computing Research Centers.


Trending on IoT: Our Most Popular Developer Stories for September


UPM Sensor Library

Discover the UPM Sensor Library's New Website

UPM, a high-level sensor library for the Intel® IoT Platform, has a new website. Explore over 400 supported sensors, along with easy-to-find code samples, sensor specifications, datasheets, and more.


Computer Vision for IoT

Python* Code Samples for Video Analytics with OpenCV

Download these computer vision code samples, which are a good starting point for developers who want to develop more robust computer vision and analytic solutions.


Intel NUC

Use the Intel® NUC and Ubuntu* to Build a Cloud-Connected Sensor Application

Learn how to install Ubuntu* on an Intel® NUC and how to build a modular IoT application with cloud connectivity.


IoT Protocols

Developing for Intel® Active Management Technology

Discover how Intel® Active Management Technology can make remote management of computers much easier.


IoT and Computer Vision

Intel® Computer Vision SDK - A Brief Overview

Get started developing computer vision applications using the Intel® Computer Vision SDK.


Terasic* Nano Board

Explore the GPIO Example Application

Learn to interact with the Terasic* DE10-Nano board's digital I/O. Walk through the process of reading from and writing to those peripherals using the Linux GPIO framework.


Adam Milton-Barker

Intel® Software Innovator Adam Milton-Barker: Using AI and IoT to Disrupt Technology

Read about Adam Milton-Barker’s journey from typewriter to launching his own projects including an intelligent assistant and a platform to teach kids about artificial intelligence.


Small Batches

Getting to Small Batches in Hardware Design Using Simulation

Learn about applying simulation to enable small batches, early adjustment, and better efficiency in hardware and system design.


IoT Journey

IoT Journey: From Prototype to Start-Up

This webinar shows how to take a real-life IoT design example from start to finish.


Visual Retail with AMT

Developing for Visual Retail Using Intel® Active Management Technology

Join Intel Evangelist Raghavendra Ural to learn the setup and implementation of a system that is remotely managed and monitored using Intel® Active Management Technology.


Intel® Developer Zone experts, Intel® Software Innovators, and Intel® Black Belt Software Developers contribute hundreds of helpful articles and blog posts every month. From code samples to how-to guides, we gather the most popular software developer stories in one place each month so you don’t miss a thing.  Miss last month?  Read it here. 

Intel IoT

AI Developers and Students ‘Go Big’ at Intel® Nervana™ AI DevJam Event


Today's brightest AI innovators share knowledge and insights at Intel AI DevJam and Student Ambassador Forum

AI developers, data scientists and students assembled for an evening of networking and training in AI solutions powered by Intel® technologies at the latest Intel® Nervana AI DevJam and Student Ambassador Forum.  Both events were held September 18 at The Village in San Francisco in conjunction with the O’Reilly AI Conference.  

Hosted by Intel® Nervana™ AI Academy, this event helped AI developers and students increase their knowledge and learn how to put machine learning to use quickly, efficiently, and cost-effectively on Intel® architecture.

Some 500 AI-focused attendees took part in the events, sharpening their machine learning skills as they engaged with AI experts from Intel as well as external experts who are part of Intel’s Innovator and Student Ambassador programs.


Demos Feature What’s New in AI

The DevJam and the Student Ambassador Forum showcased technical AI demos from Intel as well as from Intel® Software Innovators and Intel® Student Ambassadors.   Activities were dedicated to highlighting notable new developments and star innovators in AI, as well as tools, frameworks, resources, and training available from Intel.

Demos at the Intel Nervana AI DevJam event included:

  • Intel® Nervana™ AI Academy– Showcasing the AI Academy, this demo illustrates its offerings, including training through online tutorials, webinars, and student kits; development opportunities through exclusive access to the Intel® Nervana™ DevCloud; teaching materials through coursework and exercises; and collaboration opportunities through Intel® Developer Mesh and events.
  • BigDL: Deep Learning on Apache Spark*– This distributed deep learning framework provides rich deep learning support for better scale, higher resource utilization, and improved TCO, among other benefits.
  • Vehicle Rear Vision with Movidius*– Intel® Software Innovator Peter Ma demonstrated an add-on camera for vehicles that combines radar and AI technology, including the Movidius Neural Compute Stick, to achieve safer backup driving.
  • Deep Learning for Power Line Fault Detection– This showcased how Intel® architecture and Intel-optimized TensorFlow* are used for object recognition, enabling drones to capture footage of power lines and recognize faults. Power lines, pylons, and other objects are identified and classified using TensorFlow.
  • Multi-Stick on Raspberry Pi - Developers may wonder whether it is possible to plug multiple Neural Compute Sticks (NCSs) into a hub and run them all simultaneously to further speed up a neural network that’s processing successive frames of video. The answer is ‘yes’! This demo showcased four NCSs in a hub, operating a neural network that recognizes objects as the four sticks successively process images. It demonstrates the scalability available when prototyping neural networks using the NCS.
  • Visual Knowledge Graph – This demo provided attendees with a fun and positive view of the work Vilynx is doing with the Visual Knowledge Graph and how it will be available for the machine and deep learning community as a training set.
  • AI for Good – Many developers and companies are exploring opportunities to develop AI applications for good. This demo showcased how Intel is working with Thorn to help prevent child trafficking through image-search optimization. See an example of facial recognition technology at work.
  • TensorFlow* Optimizations on Intel® Architecture - TensorFlow is an open-source software library for numerical computation using data flow graphs. Intel and Google engineers are working together to optimize TensorFlow as a flexible AI framework for Intel® Xeon® and Intel® Xeon Phi™ processors.

Keynote and Panels Provide Insight on AI Industry

A highlight of the DevJam event was the keynote Q&A delivered by Naveen Rao, Intel Corporate Vice President and General Manager, Artificial Intelligence Products Group.

Preceding Rao's keynote was a student panel moderated by Scott Apeland, Director of the Intel® Developer Programs. During this segment, student participants discussed AI curriculum and what their instructors believe is shaping the future of AI. Panelists included Panuwat Janwattanapong, Florida International University; Nikhil Murthy, Massachusetts Institute of Technology; Andy Rosales Elias, UC Santa Barbara; Pallab Paul, Rutgers University; and Srivignessh Pacham Sri Srinivasan, Arizona State University.

Apeland also moderated a fireside chat and brief Q&A session with Intel experts discussing  “The Future of Intel and AI.”  Panelists included Julie Choi, Director of AI Products Group Marketing; and Hanlin Tang, Staff Algorithms Engineer.


Student Ambassadors Showcase Their Work

During the Student Ambassador Forum, student presenters discussed their research and projects and shared their experiences in the AI space. Student ambassadors also had the opportunity to demonstrate their projects at DevJam:

Face It - AI Hairstyle Recommender– Intel Student Ambassador Pallab Paul showed how AI technology can scan a customer’s face to determine overall features and shape. Customers can select a style appropriate to their hair and lifestyle. The technology then offers various hairstyle recommendations and hair-related tips.

Deep Auto-Encoders for Network Intrusion Detection– To detect anomalous behavior without outside supervision, Intel Student Ambassador Srivignessh Pacham Sri Srinivasan demonstrated how a network learns the stable state of its environment online using deep auto-encoders.

Classify Images in Apache Spark*– This demo showcased the use of deep learning in Spark* to provide a framework to classify images of food as vegetarian or non-vegetarian.

By participating in Intel® Nervana™ AI Academy, developers, data scientists, students, professors, and startups can access the latest tools, optimized frameworks, and training for artificial intelligence, machine learning, and deep learning. If you are new to AI, the Academy offers ideas and guidance on where to begin.

For more AI-related news from the O’Reilly AI Conference, check out Data Center Group Vice President and General Manager Lisa Spellman’s announcement on Intel® Nervana™ DevCloud, a cloud-hosted hardware and software platform for developers, data scientists, researchers, academics, and startups to learn, sandbox, and accelerate development of AI solutions, with free compute cloud access powered by Intel® Xeon® Scalable processors:

Intel Accelerates Accessibility to AI with Developer Cloud Computing Resources

Follow other Intel® Nervana™ AI Academy happenings at @IntelSoftware.

MeshCentral2 - Load Balancer & Peering Support


MeshCentral2 is a free, open source, web-based remote computer management solution that allows administrators to set up new servers in minutes and start remotely controlling computers using both a software agent and Intel® AMT. The server works both in a LAN environment and over the Internet in a WAN setup. Now, I have just released a new version with support for server-to-server peering, allowing for improved fail-over robustness and scaling. Some technical details:

  • Servers connect to each other using secure web sockets on port 443. This is just like browsers and Mesh agents, so you can set up a fully working peered server installation with only port 443 being open.
  • Server peering and mesh agent connections use a secondary authentication certificate, allowing the server HTTPS public certificate (presented to the browser) to be changed. This allows MeshCentral2 peer servers to be set up with different HTTPS certificates. As a result, MeshCentral2 can be set up in a multi-geo configuration.
  • All of the peering is real-time. As servers peer together and devices connect to the servers, users see a real-time view on the web page of what devices are available for management. No page refresh required.
  • MeshCentral2 supports TLS-offload hardware for all connections including Intel® AMT CIRA even when peering. So, MeshCentral2 servers can benefit from the added scaling of TLS offload accelerators.
  • Full support for server peering for browsers, Mesh agents, and Intel® AMT connections.
  • The server peering system does not use the database at all to exchange state data. This boosts the efficiency of the servers because the database is only used for long-term data storage, not real-time state.
  • There is no limit to how many servers you can peer; however, I have currently only tested a two-server configuration.

Note that MeshCentral2 is still in beta and not yet suitable for production use. If you want to try the new server, check out our main MeshCentral2 web site and our NodeJS NPM portal.

Enjoy!
Ylian Saint-Hilaire
http://meshcommander.com/meshcentral2


MeshCentral2 now fully supports server peering. You can now set up two or more servers and
split the MeshAgent/Browser/Intel® AMT connections between the servers.

Because of the new peering design, new connection protocols and authentication architecture,
MeshCentral2 can support a wider range of configurations and fully support TLS accelerators.

Art’Em - Week 2


Art’Em is an application that hopes to bring artistic style transfer to virtual reality. It aims to increase the stylization speed by using low precision networks. For this Early Innovation Project, I hope to use low precision networks to replace the underlying multiplications with additions and Exclusive-NOR (XNOR) bitwise operations.

It is clear from this image that the operations in the above matrix multiplication can be replaced with bitwise XNOR and population count operations. Here, pcnt is the population count.
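As a concrete illustration of this replacement (my own sketch, not code from the project), the dot product of two ±1 vectors equals 2 * pcnt(xnor(a, b)) - n, where n is the vector length:

```python
import numpy as np

def sign_dot(a, b):
    # Ordinary dot product of the binarized (+1/-1) vectors.
    return int(np.dot(np.where(a >= 0, 1, -1), np.where(b >= 0, 1, -1)))

def xnor_dot(a, b):
    # Encode +1 as bit 1 and -1 as bit 0, pack each vector into an integer,
    # then recover the dot product as 2 * pcnt(xnor) - n.
    n = len(a)
    bits_a = int("".join("1" if x >= 0 else "0" for x in a), 2)
    bits_b = int("".join("1" if x >= 0 else "0" for x in b), 2)
    mask = (1 << n) - 1
    xnor = ~(bits_a ^ bits_b) & mask   # 1 wherever the signs agree
    pcnt = bin(xnor).count("1")        # population count
    return 2 * pcnt - n

a = np.array([0.7, -1.2, 0.3, -0.5])
b = np.array([0.2, -0.4, -0.9, -0.1])
print(sign_dot(a, b), xnor_dot(a, b))  # both give 2
```

The multiply-accumulate loop has disappeared entirely; only a XOR, a NOT, and a popcount remain.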

Before we get our hands dirty with some conceptual jargon, let us understand what the aim here is.

What is Artistic style transfer?


(https://goo.gl/6T67VY)

You may have seen this on the Prisma* app or on YouTube. Artistic style transfer essentially transforms an input image into the ‘styling’ of any other reference image provided. Here, the input image is the content image, and the reference image is the style image.

While this is all easy to talk about, how do you define the ‘style’ of an image, and the ‘content’ of the image? And how do you selectively extract the style from an image? To answer this, one must understand what a convolutional network is.

A convolutional neural network (CNN) is a type of deep neural network whose convolution operation was adapted from vision processing in animals. I won’t bore you with the details of CNNs, but one must know that these networks have neurons that respond to specific image features. These features can best be imagined with the following image:

(http://www.iro.umontreal.ca/~bengioy/talks/DL-Tutorial-NIPS2015.pdf)

As you can see, each box in the image has specific features that it responds to maximally. The deeper you go into the network, the more abstract the classifier features get.

This is key to understanding what is happening in style transfer. We now know that as we go deeper into a network, each layer responds to more abstract features of the image. One has to find the right balance between what level of features are to be extracted from the style image and what to retain from the content image.

Here, c is the content image, s is the style image, and x is the image undergoing transformation.

This is now an optimization problem in which the total loss F(x) = α·Lcontent(c, x) + β·Lstyle(s, x) + γ·LTV(x) has to be minimized. The weights α, β, and γ are attached to the content loss, the style loss, and the total variational loss. I have not talked about the total variational loss; it is simply a term that acts as a regularizer and keeps the overall stylized image smooth.

To calculate Lcontent and Lstyle, we simply choose layers from a pretrained VGG 16 network whose activations we compare between (c, x) and (s, x) respectively. The norm is to use an optimizer like the Adam optimizer or limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS).
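To make the loss terms concrete, here is a minimal NumPy sketch (my own illustration, not the project's code) of the content loss and the Gram-matrix style loss, assuming the feature maps have already been extracted from the chosen VGG layers and flattened to shape (channels, height*width):

```python
import numpy as np

def gram(f):
    # Gram matrix of a (channels, height*width) feature map:
    # captures which filters fire together, i.e. the "style".
    return f @ f.T

def content_loss(f_c, f_x):
    # Squared-error distance between content and generated features.
    return 0.5 * np.sum((f_x - f_c) ** 2)

def style_loss(f_s, f_x):
    # Normalized squared-error distance between Gram matrices.
    c, hw = f_s.shape
    return np.sum((gram(f_x) - gram(f_s)) ** 2) / (4.0 * c ** 2 * hw ** 2)

def total_loss(f_c, f_s, f_x, alpha=1.0, beta=1e3):
    # F = alpha * Lcontent + beta * Lstyle (total variational term omitted here).
    return alpha * content_loss(f_c, f_x) + beta * style_loss(f_s, f_x)

f = np.random.rand(64, 32 * 32)   # hypothetical flattened VGG activations
print(total_loss(f, f, f))        # 0.0 when x matches both c and s
```

In the real pipeline the optimizer differentiates this sum with respect to the pixels of x, not the network weights.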

The fastest stylization rate I have seen is approximately 1 frame per second on a Nvidia* TITAN Xp graphics card. This is too slow to be deemed real time, even though the procedure itself is highly complicated.

Now that you know what artistic style transfer is, let me explain what I am trying to do.

Binarizing the Visual Geometry Group (VGG) 16

The algorithm behind binarization has been taken from this paper, and it is simply the following:

This was applied to the VGG 16 network. Alk is simply the mean of the absolute values of the layer’s weight matrix.
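A minimal sketch of this per-layer scheme (my paraphrase of the idea, not the project's actual code): each weight matrix W is approximated as alpha * sign(W), where alpha is the mean absolute value of W.

```python
import numpy as np

def binarize_layer(W):
    # Approximate W by alpha * B, where B = sign(W) and
    # alpha = ||W||_1 / n is the mean absolute weight.
    alpha = np.mean(np.abs(W))
    B = np.where(W >= 0, 1.0, -1.0)
    return alpha, B

W = np.array([[0.5, -1.5],
              [2.0, -1.0]])
alpha, B = binarize_layer(W)
print(alpha)      # 1.25
print(alpha * B)  # the rank-one-scaled binary approximation of W
```

The single scalar alpha per layer is what lets the expensive multiplications collapse into sign flips plus one final scaling.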

Of course, binarization is an incredibly destructive process, which greatly decreases the accuracy of the binarized VGG 16. Currently, better methods like loss-aware binarization are being explored.

I ran a function on an Intel® Xeon Phi™ cluster, which helped me visualize the maximum activation of each layer in the network before and after applying the vanilla binarization algorithm written above.

As you can see in this image, the binarized network has greatly reduced feature complexity. There is a lot of loss in the quality of the filters, and one cannot hope to extract high-level features with the binarized network.

Trusting good literature and the plunger dilemma

While the results of binarization are not too great, I am confident that after training the binarized network, we will move past feature extraction onto parallelization.

I ran the binarized network on a classification task, and it classified everything, cats and dogs alike, as a plunger. This may seem discouraging, but one must remember that the VGG 16 classification model and the VGG 16 no_top model are miles apart in complexity. I am going to put my faith in the several papers I have read and train the binarized network further to improve its feature detection. Low precision networks have performed close to state-of-the-art algorithms on classification tasks.

Speeding things up

Typically, 32-bit floating point multiplications are used in most neural networks. However, these are very expensive operations. In binary neural networks (BNNs), these multiplications can be replaced with bitwise XNORs and left and right bit shifts. This is extremely feasible, provided the accuracy of the network isn’t compromised too much. This article further details how BNNs operate.

My plan is to create a simple CUDA kernel implementation of convolution using the XNOR dot product. While the bit-packing operation mentioned in the article adds computation, the forward-propagation throughput should theoretically be 32 times faster than an unoptimized baseline CUDA kernel for GEMM.
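The bit-packing step can be sketched in a few lines of Python (an illustration of the idea, not the planned CUDA kernel): 32 binarized weights collapse into one 32-bit word, so one XNOR plus one popcount stands in for 32 multiply-adds.

```python
import numpy as np

def pack_signs(v):
    # Pack a +/-1 vector whose length is a multiple of 32 into uint32 words.
    bits = (np.asarray(v) > 0).astype(np.uint8)
    return np.packbits(bits).view(np.uint32)

def packed_dot(pa, pb, n):
    # One XNOR per 32-weight word, then popcount; dot = 2 * pcnt - n.
    xnor = np.invert(pa ^ pb)
    pcnt = sum(bin(int(w)).count("1") for w in xnor)
    return 2 * pcnt - n

a = np.tile([1.0, -1.0], 16)  # a length-32 +/-1 vector -> one uint32 word
b = np.ones(32)
print(packed_dot(pack_signs(a), pack_signs(b), 32))  # 0, same as np.dot(a, b)
```

On a GPU, each word-level XNOR/popcount pair maps to a single hardware instruction, which is where the theoretical 32x comes from.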

Since artistic style transfer requires backpropagation to change the stylized image, it is extremely important to exploit this technique in backpropagation, as well as in the L-BFGS/Adam optimizer.

Backpropagation:

Notation:

  1. l is the lth layer.
  2. Input x is of dimension H×W, with i and j as iterators.
  3. Filter (kernel): w, with m and n as iterators over its dimensions.
  4. f is the activation function.

Backpropagating through the network with the above operation can also be parallelized, as it is simply another convolution operation. For further reading on convolution, this is the best reference I can provide.
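To illustrate why the backward pass parallelizes the same way, here is a small NumPy sketch (my own, using the notation above): the gradient of the loss with respect to the filter w is itself a 'valid' convolution of the input x with the upstream gradient.

```python
import numpy as np

def conv2d_valid(x, w):
    # out[i, j] = sum over m, n of x[i + m, j + n] * w[m, n]
    H, W = x.shape
    m, n = w.shape
    out = np.zeros((H - m + 1, W - n + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + m, j:j + n] * w)
    return out

x = np.arange(16.0).reshape(4, 4)  # toy input
w = np.ones((2, 2))                # toy filter
y = conv2d_valid(x, w)             # forward pass, shape (3, 3)

# Backward pass for the filter: dL/dw = conv2d_valid(x, dL/dy),
# so the same kernel used for the forward pass serves the backward pass.
dy = np.ones_like(y)               # pretend the upstream gradient is all ones
dw = conv2d_valid(x, dy)           # shape (2, 2)
print(y[0, 0], dw[0, 0])           # 10.0 45.0
```

Because both passes are convolutions, both can reuse the same XNOR/popcount machinery once the operands are binarized.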

Wrapping things up

While the network isn’t stylizing particularly well when compared to the full-precision net, that is expected of an untrained binarized network. The current focus of this endeavor is to set up the XNOR forward and backward propagation while training the binarized network on the Intel® Xeon Phi™ cluster. After the CUDA implementation of convolution has been integrated with a framework, I will begin to test its effectiveness on a graphics processing unit (GPU).

Many techniques are still in the pipeline: super resolution, down-sampling, better loss functions (perceptual loss, etc.), better optimizers, and perhaps smaller networks too.

I am also planning to divide the input image into smaller segments and parallelize stylization over GPUs, with a better loss function.

Perhaps, for calculating the content loss, the primary network can run on the GPU while the content throughput is redirected to a vision processing unit: the Movidius™ Neural Compute Stick. This will allow for even faster style transfer!

The potential of this approach extends beyond just style transfer. Due to the reduced network sizes as well as the fast throughput, one can implement these networks on low-powered devices if correctly optimized.

I am extremely optimistic about this project. Let's bring style transfer to VR!

The Intel® Software Distribution Hub Fuels Enthusiast PC Sales with Amazing PC Game Bundles


Software Distribution Hub Gamer Girl Cover

Best known as the inventor of the first mass-produced integrated circuits, memory chips, and microprocessors, Intel also delivers business innovation through market-driving programs such as Intel Inside®. On Sept. 19th, we announced our latest business innovation, the Intel® Software Distribution Hub (Intel® SDH).

Intel SDH is a wholesale business-to-business (B2B) software marketplace. It allows our hardware partners to create unique bundles and promotions, and it helps our software partners reach more customers and sell through our hardware channel. The end result is growth for our software and hardware partners, and more demand for higher-end Intel® silicon.

In conceiving the Intel SDH, we saw an opportunity to meet unfilled needs for software and hardware partners: Independent software vendors (ISVs) want volume sales revenue from new and existing catalogues and access to more consumers; PC manufacturers (OEMs) and resellers want to differentiate their offerings by accessing premium titles at a low-cost, protecting average selling prices while offering value-added sales incentives. Up until now, there was no single solution to meet all of those needs. We saw an opportunity to partner with a leading game distributor to build and manage a global B2B platform.

The Intel SDH is powered by our trusted partner, Green Man Gaming. They have tremendous experience in secure payments, game key inventory management, and distribution to authorized Intel partners and their customers. ISVs and publishers sell titles and Green Man Gaming distributes the codes to channel partners, retailers and OEMs who order and bundle the game titles. 

“We enhance the experience of our gamers on a direct community relationship level, but also from a business-to-business perspective,” said Green Man Gaming Founder & CEO Paul Sulyok. “Our Intel relationship has grown very strong, from efforts where Intel helped promote developers in the marketplace to where we work together to ease friction, looking at B2B opportunities with hardware manufacturers and resellers. That relationship is built on a common vision for where we’d like to go with our customers.”

Intel Software Distribution Hub Welcome

Intel SDH is a B2B self-service software marketplace and promotional engine.

Intel’s PC channel partners, retailers and OEMs can now create multiple custom bundles for Intel-based PC promotional campaigns. They can utilize the Intel SDH site to ignite sales throughout the year. Eligible partners get access to a wide variety of game titles and price points without high volume commitments, and they can offer users popular game titles without managing multiple ISV/publisher account relationships.

“The Intel® Software Distribution Hub provides a great benefit for our Intel® Technology Provider Gold and Platinum Partners. It offers them the unique opportunity to differentiate themselves by providing a wide range of popular game titles in addition to delivering the excellent performance and exceptional experiences our gaming customers demand,” said Ricardo Moreno, VP, Intel Sales and Marketing Group.

For game developers, the Intel SDH is a private, wholesale-only B2B software marketplace that helps generate new sales and exposure through a global network of authorized Intel PC resellers and retailers. It’s a new, controlled sales channel that gives ISVs and publishers the opportunity to sell game licenses to hundreds of Intel partners worldwide, who want to bundle exciting titles with their gaming systems.

ISVs and publishers can use the bundled promotions to reach target gamers while also scaling up their consumer base. The Intel SDH offers a new way for ISVs and publishers to deepen the exposure of their titles. Their marketing assets will become available to members, after purchase, for use in sales campaigns around the world, giving the titles increased impressions and extended visibility.

The site is flexible, simple, and free to use.  It’s fully supported by Green Man Gaming’s customer support team, seven days a week, and there is no cost to the ISV or publisher.

If you’re a developer of PC games, and you’d like to learn more about getting involved with the Intel Software Distribution Hub, contact your Account Manager and/or visit the site at https://isdh.greenmangaming.com.

#  #  #

* Intel and the Intel logo are trademarks or registered trademarks of Intel Corporation.  Other names may be claimed as the property of others. © Intel Corporation, 2017.  

