Channel: Intel Developer Zone Blogs

AI Student Ambassador Alfred Ongere: Using AI to Improve Lives

The Intel® Student Developer Program was created to work collaboratively with students at innovative schools and universities doing great work in the Machine Learning and Artificial Intelligence space. I had the opportunity to get to know Intel® Student Ambassador Alfred Ongere and learn about how he became interested in artificial intelligence and his passion to be a positive influence for others.

Tell us about your background.

I am very passionate about impact and I consider myself to be an Impact Chaser. That's because I pursue opportunities, activities, and ventures that impact other people's lives positively. Since March 2016 I have served as an Intel® Student Partner and then Intel® Student Ambassador for Multimedia University, where I have been studying for my degree in Telecommunications and Information Engineering. I am also passionate about entrepreneurship, particularly social entrepreneurship, which to me means paying attention to the people or end customers you are serving. It is about being customer- and impact-centric.

I attended the 2017 Advanced Trepcamp High-Impact Entrepreneurship program at Boston University, where I interacted with clusters dealing with robotics, virtual reality, Internet of Things (IoT), education, and artificial intelligence (AI) technologies. I am also a Windows* Insiders 4 Good East African Fellow courtesy of my startup, Sahibu, a project that aims to improve refugees' lives through opportunities, life-saving information, and alerts on their mobile phones. I am passionate about sharing knowledge and experiences with the people around me, and I always enjoy every session that I am part of under Intel's programs.

What got you started in technology?

Ever since I was a kid, I always had a curiosity about how things, especially machines, worked. That curiosity grew into wanting to fix things when I saw a problem. The first thing I remember fixing was a broken toy car. Eventually, learning engineering and consequently, technology provided the best means of fixing and improving things.

What projects are you working on now?

I have quite a number I should be working on, but currently I'm focusing on improving my IoT project that measures and transmits the level of fill in regularly shaped containers. I want to add some data science and AI to it so that I can predict future fill levels based on the data collected previously. I will fit a linear regression model to the data collected from previous tests. By comparing current readings against the model's predictions, such a solution will help cut costs.
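To make the forecasting idea concrete, here is a minimal sketch, not the project's actual code, of fitting a straight line to fill-level readings by ordinary least squares and extrapolating a future value. The C++ below is self-contained; the timestamps, readings, and prediction horizon are invented for illustration.

    #include <cstdio>
    #include <vector>

    // Fit fill = a + b*t by ordinary least squares, then extrapolate.
    // All data here is illustrative only.
    int main() {
        std::vector<double> t    = {0, 1, 2, 3, 4, 5};          // hours since first reading
        std::vector<double> fill = {5, 14, 22, 31, 39, 48};     // percent full
        double n = static_cast<double>(t.size());
        double st = 0, sy = 0, stt = 0, sty = 0;
        for (std::size_t i = 0; i < t.size(); ++i) {
            st += t[i]; sy += fill[i]; stt += t[i] * t[i]; sty += t[i] * fill[i];
        }
        double b = (n * sty - st * sy) / (n * stt - st * st);   // slope: fill rate per hour
        double a = (sy - b * st) / n;                           // intercept
        double future = 8.0;                                    // predict 8 hours ahead
        std::printf("predicted fill at t=%.1f h: %.1f%%\n", future, a + b * future);
        return 0;
    }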

I am also working on a project that aims to prove whether there is a correlation between someone's facial features and their voice. The goal is to have a well-trained model that would predict your voice or face based on either input provided to it. Once the project has matured, it could be used to describe people's faces to blind people based only on the voices they hear; this could come in handy, especially in emergencies like blackouts. I hope to use the Intel® Movidius™ Neural Compute Stick (NCS) for inferencing once I am done training and designing the model, because my project would benefit from using a dedicated low-power-consumption device.

Tell us about a technology challenge you’ve had to overcome in a project?

Of late, I've been encountering so many cool case study projects that I want to share with other developers during my events, and it's always hard to decide which one will best suit their technical needs. I always have to strike a balance between advanced and introductory material so that I don't leave out the beginners who come to the events for discovery. Now that the program is growing, I'm learning that I need to have more events tailored to the audience type so that each can benefit and learn in equal proportion. The more advanced audience members have different needs that must be addressed in a different way.

What trends do you see happening in technology in the near future?

I see AI as a Service (AIaaS) thriving in the near future, in almost all aspects of the industry. With the ever-growing amounts of data that we keep producing every day, companies with the best AI, Machine Learning (ML), and Deep Learning (DL) services will be in higher demand as this space keeps advancing day to day. Microsoft recently announced that it will be introducing AI package solutions in Office 365, which means that a normal user would leverage AI solutions from Microsoft in their Excel spreadsheets and documents. Another example is DataRobot, which automates ML for users who are not data scientists. A user can upload an Excel document containing structured information, and the site will give guidance on the algorithm to use, based on what the user aims to achieve. The site simplifies the ML problem-solving approach for non-data scientists and business analysts. Putting such a solution on the cloud, and consequently offering it as a service, makes it more accessible and simple to use for everyone. More such solutions will take shape in the future as AI becomes a widely researched field.

How are you planning to leverage Artificial Intelligence or Deep Learning technologies in your work?

Of late, I've found that most of the solutions I keep coming up with are inclined towards helping people make the right decisions. I see myself leveraging AI to build the best decision-support tools for end users. I hope to make the most of libraries optimized for Intel infrastructure, and particularly the Intel® Movidius™ NCS, in applications that require high compute with low power consumption.

What are you looking forward to doing with Intel as a Student Ambassador?

I look forward to growing the tech community in my country and constantly providing new ideas to improve the ambassador program. I also hope to connect with and educate more up-and-coming developers on all the Intel resources that can help advance their technology projects. And I look forward to meeting the great minds behind the Intel technologies that we enjoy in our daily lives.

How can Intel help students like you succeed?

By providing more technology resources that students can experiment and grow from. I applaud Intel for launching the Intel® Nervana™ AI Academy and making it accessible worldwide. 

What impact on the world do you see AI having? And do you see yourself as part of it?

I see AI helping us make better decisions in life, and I see myself as one of the frontrunners pursuing such solutions to improve people's lives. I'll give a personal example that I go through every day: Samsung* Health. Using pedometer sensors, my Samsung* Galaxy Edge phone independently counts how many steps I take daily, as long as I'm carrying it in my pocket or holding it in my hand while walking. I can also set up challenges with my friends, or join global challenges with other users, where we compete to see who hits the highest number of steps in a given time. In a bid to increase my daily steps while competing with close friends in a challenge, I have developed a tendency to take the stairs every day at work. The app has influenced me to avoid using lifts, since I need to keep fit and increase my daily steps at the same time. AI, the step classifier in this case, has influenced my decision to skip the lifts, and I can now walk up seven floors daily because I've gotten used to it, thanks to the app.

In the case of my startup Sahibu, I aim to improve and empower refugees using information, because having information influences the decisions they make in their daily lives. I aim to create and leverage text summarization AI to collect and summarize important information and deliver it urgently to refugees on their phones. I also plan to create or leverage language classification and translation AI to address language barrier problems for refugees.

Want to learn more? Check out our Student Developer Zone, join Intel® Developer Mesh, or learn more about becoming a Student Ambassador

Interested in more information? Contact Niven Singh


AI Student Ambassador Bruna Pearson: Deep Learning, Robots, and Drones Oh My!

The Intel® Student Developer Program was created to work collaboratively with students at innovative schools and universities doing great work in the Machine Learning and Artificial Intelligence space. I had the opportunity to get to know Intel® Student Ambassador Bruna Pearson and learn about how she got interested in deep learning, robots, and drones.

What got you started in technology?

I was born and grew up in Brazil. One day, my father brought home an outdated TK-85, which was a clone of Sinclair's ZX81. Until that point I had never even seen a computer. It is fair to say that it was love at first sight! The books we had about programming were all in English, and so I used a dictionary to translate them to Portuguese, and soon learned how to code in BASIC. I've not stopped using computers since.

Robotics came much later, when I was at university in Durham, and a pair of Pioneer-3ATs was brought to our computer vision lab.  I had played with Lego and Raspberry Pi powered robots before, but the P3-AT was different: it was a tool, rather than a toy, and it was then that it became clear that robotics was what I wanted to do.

Tell us about your background.

I completed my master's in computer science at Durham University, where I had the opportunity to investigate 3D scene mapping and navigation using multiple collaborative robots. This was an extension of my third-year project, in which I investigated how to navigate autonomously in unstructured environments, both indoors and outdoors. In this project we applied visual saliency to the input image retrieved from a single camera. Using image segmentation we were able to distinguish between paths and non-paths, so that the robot could learn where it was safe to navigate and could make decisions about how far and how fast it could go.
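As a rough illustration of that kind of single-camera pipeline, and not the project's actual implementation, the sketch below uses OpenCV's contrib saliency module (opencv_contrib is assumed to be installed) to compute a spectral-residual saliency map for each frame and split it into two regions with a threshold; deciding which region corresponds to the traversable path would require the further processing and learning described above.

    #include <opencv2/opencv.hpp>
    #include <opencv2/saliency.hpp>

    // Crude path/non-path segmentation from a single camera: compute a spectral
    // residual saliency map, then split it into two regions with Otsu's threshold.
    int main() {
        cv::VideoCapture cap(0);                      // forward-facing camera
        auto sal = cv::saliency::StaticSaliencySpectralResidual::create();
        cv::Mat frame, salMap, salU8, mask;
        while (cap.read(frame)) {
            sal->computeSaliency(frame, salMap);      // float saliency map
            salMap.convertTo(salU8, CV_8U, 255.0);
            cv::threshold(salU8, mask, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
            cv::imshow("segmentation", mask);         // white/black = the two candidate regions
            if (cv::waitKey(1) == 27) break;          // Esc to quit
        }
        return 0;
    }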

Currently, I am reading for a PhD, also at Durham, where I am funded by the Engineering and Physical Sciences Research Council (EPSRC). I am looking at how to fly autonomously in unstructured environments, such as under a forest canopy, while simultaneously producing a 3D image of the environment. This project involves the use of a low-cost Unmanned Aerial Vehicle (UAV) to autonomously retrieve imagery data, before performing a procedure called Structure from Motion (SfM) in order to produce a 3D map of the surveyed area.

What was the inspiration for you to get started with 3D mapping and navigation?

During my first summer internship, I started investigating saliency maps and how to apply them to pathway detection problems. In the beginning it was very painful because absolutely nothing was working. But slowly and with a lot of support from my colleagues, I started to get the hang of MATLAB and C++ and everything started to work. I really enjoyed that experience and I wanted to do more, so instead of doing my third year dissertation in Image Processing I asked to change to Computer Vision. This change allowed me to expand my research into Robotics and the exploration of both navigation and 3D mapping was suggested by my supervisor.

What projects are you working on now?

I am using Deep Learning to simulate humans' perceptions of forest trails with the intention of giving a UAV the ability to identify a trail and to make an autonomous decision about the best flight strategy to be executed in real time. The applications of this project vary from Search and Rescue (SaR) missions to assessments of forest structures and riverscapes and even to farming, where UAVs are being used to assist crop monitoring and to spot bacterial or fungal infections on trees, for example.

What Intel technologies are you currently using, or plan to use?

Currently I am investigating the use of the Intel® Movidius™ Neural Compute Stick to identify trails in unstructured environments. In addition to that my plan is to use Intel’s ready to fly drone to perform data gathering and test autonomous flights. 

How are you planning to leverage Artificial Intelligence or Deep Learning technologies in your work?

I believe that in order for a UAV to be able to make similar decisions to those that a person would while navigating in unknown and unstructured environments, more is required than just freezing or adding extra layers into a DNN. I believe we need to re-think our approach to development and simplify the process of creating models. This is what I am currently working on.

What is a technology challenge you’ve had to overcome in a project?

In the current project, our main challenge is data gathering. We are interested in data that can provide enough information to aid generalization to different domain targets and also to allow more accurate obstacle avoidance behavior. In order to achieve this we need to train Deep Neural Networks, which by nature require a lot of data for classification. In addition, this data needs to be significantly varied and, to some extent, sequential. That leaves us with the option of gathering either synthetic or real-world data to train our model. Although gathering synthetic data from games or simulators seems at first a convenient option, it can result in models that don't perform well in real-life scenarios. So, in reality we are trying to achieve the best level of accuracy possible using smaller samples of both real and synthetic data.

Tell us about your experience as a woman working in technology.

The women in my family have a saying that has been passed down and emphasized over the generations: a woman should be like a bamboo tree, because although a strong wind can bend it, it is still very hard to break. This does not mean we have to be invincible or harsh, just that we should not give up easily. Adversities, like the strong wind, will always surround us wherever we go. It is up to us to keep going.

I have met so many strong women in technology who do just that – they keep going. One is Dr. Sarah Drummond, who, like many other women, raised her children while doing her PhD; we have lots of other brilliant female PhD students who are doing just that, and they inspire me every day. Another is Dr. Hannah Dee, who created the Lovelace Colloquium because she thought it would be a good idea to have a space where girls could present their work and talk about technology. To be honest I think she had an amazing idea, and the Colloquium has just celebrated 10 years of encouraging and supporting women in technology. When I was an undergraduate I was encouraged to present my work at the Colloquium, and I am now one of its greatest fans.

The Colloquium has helped me to see how important it is to encourage more girls to have a go at computing. Because of this, in 2015, I and other computer science (CS) students created a Facebook group called "Women in CS – Durham", which connects former CS students with fresher students. It is also a space where we share our achievements, job opportunities, and additional support and training tailored for girls. We also want to show that you don't need to major in Computer Science to like technology, so together with Dr. Ibad Kureshi and others we support the Code First Girls initiative, where we work as volunteers aiming to introduce girls to coding and train them to build web applications. I also work as a Science, Technology, Engineering, and Math (STEM) ambassador to support schools delivering technology-related content, but above all to talk with boys and girls about what can be achieved if you just dare to try.

I think it is important to say that my personal experience as a woman in technology has been shaped not just by the amazing female role models that I have had, but also by the support and encouragement that I have received from male colleagues, supervisors, lecturers – and the list, I am happy to say, is not small. For that, I am thankful.

What are you looking forward to doing with Intel as part of the Student Ambassador program?

This is a very exciting opportunity for me. I am learning a lot and this is reflecting positively on my research. I have space to share my work and interact with other developers through the Intel® Developer Mesh community, not to mention access to additional servers where I can speed up and increase the amount of training and testing required to fine-tune our Deep Neural Network. I am looking forward to sharing this opportunity with other students at Durham, because I am sure it will also have a positive impact on their academic development. I think it is safe to say that good research tends to have a positive impact on society in general.

How can Intel help students like you succeed?

By somehow expanding the program to incorporate secondary school students that are learning about computing. In a few years, some of them will be at the University doing what I am doing now and if they already have a basic understanding of how artificial intelligence works, they will make better decisions about how to direct their education and research. I think starting to teach about AI early is key to getting more students into technology based studies. This could take place during one- or two-week summer camps where students could learn how to code and also develop AI-related projects with support from current student ambassadors, for example.

What impact on the world do you see AI having? And do you see yourself as part of it?

I think one of the great things about AI is that it will improve the quality of life of those who need it most. Take, for example, the driverless car: it won't just reduce road accidents or take us from A to B; better than that, it will connect people who currently find it difficult to take public transport or to do simple things like shopping or visiting friends and family. There will be no waiting for taxis, trains or buses. I think that in the future most of us won't feel the need for a car either; it will be more practical to have a monthly subscription that allows us to choose which car we want at our door when we need it. We could have intelligent cars that identify the passenger's level of stress and automatically adjust light and sound conditions to recreate a more relaxing environment on the drive back home. My hope is to be amongst the ones driving this technology forward.

What trends do you see happening in technology in the near future?

I think we won't be using so many handheld devices like tablets, mobile phones or smart watches in the future. Instead, our smart sunglasses or contact lenses will display all the health-related information that is gathered by our smart clothes while we are exercising. Similarly, calls, emails and even our favorite TV shows will be displayed on the go. Food waste will be reduced, since our smart kitchens will be buying the groceries for us, and there will be no need to wait for the delivery service either; our autonomous car will pick up the shopping before picking us up from work. If robots are cooking our dinner, there is no need for fast food or takeaway food either, so we should eat more healthily as well.

Personal computers will be unnecessary; we will be working from customized workstations placed in public places. It will make sense to have a practical life where creativity is highly prized, and I think it is much easier for someone to have a eureka moment when they are having a coffee at a lovely and cozy coffee shop, or on a bench by the sea, than when they are in a closed cubicle because they need to power up their laptop to type notes.

Outside of technology, what type of hobbies do you enjoy?

Durham is a fantastic place for outdoor activities. It is part of the local culture and I really love it, so I usually alternate between hobbies depending on the weather. I enjoy hiking and swimming during spring and summer, but during autumn and winter I prefer archery or going for long walks. Independent of the season I enjoy reading, cooking and cycling. I am also slowly expanding into photography and have been enjoying gardening lately too.

Want to learn more? Check out our Student Developer Zone, join Intel® Developer Mesh, or learn more about becoming a Student Ambassador

Interested in more information? Contact Niven Singh

Portland VR Meetup, October 2017: Immersive Gaming with AR/VR

The Portland Virtual Reality Meetup is a community of Virtual Reality (VR) developers, entrepreneurs, engineers, artists, enthusiasts and early adopters and the topic for the October meetup was the future of Immersive Gaming with Augmented Reality (AR) and VR. The evening started with time to mingle, network, and play with some VR demos including a couple of first person shooter games, a light saber game, and the opportunity to check out a real relic of the technology – an original Virtual Boy from 1995. All of this was capped off with fantastic views over Portland from “Big Pink”, the second tallest building in the city.

There were three great speakers who focused on different aspects of immersive gaming with VR. The first speaker, Damon Pidhajecky, is the lead engineer on Headmaster, a PlayStation* VR (PSVR) launch title where the aim of the game is to head butt a soccer ball at different targets. He spoke about the process of developing a game for the PSVR platform, some challenges they overcame, and some tips for developers.

Damon explained some of the challenges and limitations they had to work within in order to design not only for PlayStation, but for VR too, and how they turned many of them to their advantage. For example, they had a limited number of light points, so they needed to focus the player's view on a certain area; to do this, they set the game in a dark prison environment. Another challenge was that you could only use your hands for menu navigation and not for the game itself – so they chose head-butting a soccer ball, because you don't use your hands in soccer.

For a truly immersive experience, Damon recommended that you pay special attention to the sound in your VR. You want 3D audio that captures not only what you are doing but everything that is going on around you as well. Another great tip was on testing your game: you have two kinds of testers, first-time VR users who think everything is cool, and experienced VR users who can give you more detailed feedback on things like in-game navigation, VR sickness issues, and more – and it's good to have both. It is also important to have people do long-term testing on your game, people who actually go through all of the levels, which you can't do in a five-minute demo at a tradeshow.

Headmaster was lucky to get onto the PSVR demo disc and be part of the Best Buy demo, but that added a lot of stress to the development of the game because they had to deliver the demo six months before the game was finished. They quickly learned to scrap features such as locomotion, which was causing VR sickness, rather than work to make them better. Another great tip from Damon was that when promoting your game, most people watching the trailer will not be using VR, so you should use 2D video clips, which offer a more cinematic and simulated effect. He said the process Sony uses to make sure your game passes all of its requirements is also helpful when porting the game to other platforms, because you've pretty much already optimized everything and all that's left is working through each platform's badges, awards, and so on.

Next on stage was Nima Zeighami, founder of the consulting firm Agency XR and of VR Sports, an independent VR studio. Nima talked about how sports and games have evolved and how XR is merging them together. He walked us through a brief history of both sports and games, bringing us up to today and the launch of VR video games. He explained how vSports are a new category of play: distinct from eSports because motion is a key part, and distinct from sports because your stature, size, and strength are not important but your speed is. vSports are inherently competitive and have really begun taking off thanks to VR tournaments being held around the world.

“People love video games because they do things they obviously can’t do in real life. That’s especially true with sports games because fans love to step into the shoes of their favorite athletes.” ~Ralph Baer, aka the Father of Video games

Some of the most popular vSports right now include the Virzoom Bike, which can be found in some arcades and gyms and makes exercise more fun and playful; Echo Arena, which held the final competition of a huge tournament at Oculus Connect and is the kind of game that is validating vSports because of how popular it is; and Tower Tag, which is like a version of tag for the future and includes a haptic tower that you can push off of, lean on, and more, which really amps up the feeling of actually being in the game.

Because of games like Tower Tag, which has an actual tower component bolted into the ground and needs a designated space, Nima predicts we will begin to see dedicated VR environments that we can go to and play in. The physical aspect of the experience makes it more immersive, as do other touches like having hot air blow over you like wind when your VR environment is set in the desert.

The final talk of the evening was by Brent Insko, the lead software architect in the Virtual Reality group at Intel, who is working to ensure Intel’s next generation platforms are ready for where VR heads in the future. Brent also heads up Intel’s efforts with the Khronos Group around VR with OpenXR. Brent talked about what has been holding VR back: cost, comfort, and content.

Cost: For a good experience, you need high-end equipment that can handle the resolution and processing power, although we are starting to see some of the Head Mounted Displays (HMDs) come down in price.

Comfort: The devices go on your head and get sticky and sweaty, and you have cables tethering you to the equipment – not the most comfortable. Inside-out tracking and wireless backpack systems are helping to address these issues, but they have a ways to go yet.

Content: Everyone is developing their own standards. There are three main PC VR verticals: Steam, Oculus, and Microsoft and they each have their own standards. The Khronos Group is working towards designing a standard for VR and AR content across platforms which would help developers.

Intel wants to help drive the VR experience to do more, stimulate the user better, and engage their senses. As important as the visuals are, the audio is what will make you really feel like you are there. If you can fool your eyes and your ears, then there are a few other things you can do to be truly and completely immersed, such as sensors that see where you are looking so you can interact with others in the real world, haptic components that are part of both the physical and virtual worlds, and having the environment respond to the user – for example, if your heart rate doesn't go up as three zombies come towards you, then the game could send 100 zombies at you instead.

Intel has a 2020 VR Vision for open innovation and best-in-class performance to allow VR to thrive and lead on the PC. They want to see truly immersive multiplayer gaming that lets users travel to other worlds through mind-tripping quality, wireless HMDs and peripherals, multi-sensory stimulation, multi-modal input, and sensing and responsive simulation.

As part of Intel’s developer outreach program, and to raise awareness of their VR community, Intel was one of the sponsors for the October meetup and also provided a Samsung* HMD Odyssey Windows Mixed Reality Headset with Motion Controllers to be raffled off to attendees.

To learn more:

Intel® Virtual Reality program

Intel® Software Innovator Program

Portland VR meetup

Intelligent Infrastructure for Smart Cities enabled using Intel and GE Predix

I recently had the pleasure of talking to Priyanka Bagade about a really interesting project that she created – the Intelligent Infrastructure for Smart Cities demo. This demo focuses on how to build a smart infrastructure system, enabled by Intel gateways and the GE Predix cloud, by leveraging existing infrastructure in a city. If you'd like to watch that interview, you can find it here: https://www.youtube.com/watch?v=qKM2NZWYFn0

The central idea behind the demo is that we can make our cities smarter by retrofitting existing infrastructure and interfacing it with an Intel-powered gateway. Priyanka modeled two common infrastructure elements present in most cities: the first is the induction sensor under the road that lets traffic lights know there is a car at the intersection, and the second is the traffic camera.

The induction sensors were modeled using Grove hall effect sensors under the road – the model cars each have a magnet in the base to trigger the hall effect sensor. The traffic camera was modeled using a webcam. These infrastructure elements were connected to an Intel® Core™ i5-powered NUC gateway.

The hall effect sensors counted the number of cars that passed through the intersection, recording the count locally on the gateway. The webcam image was processed on the gateway using OpenCV, pulling real-time traffic speed data out of the image. OpenCV is an open source computer vision framework originally developed at Intel in 2000 that has since become the standard in open source computer vision software.
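A much simplified sketch of that kind of gateway-side processing is shown below; it is not the demo's actual code. It assumes a fixed camera viewing the road, uses OpenCV background subtraction to find moving blobs, and converts frame-to-frame centroid displacement into an approximate speed using a made-up pixels-to-meters calibration factor.

    #include <opencv2/opencv.hpp>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Naive speed estimate: track the first sufficiently large moving blob per frame
    // and report how far its centroid moved since the previous frame.
    int main() {
        cv::VideoCapture cap(0);
        auto bg = cv::createBackgroundSubtractorMOG2();
        const double metersPerPixel = 0.05;    // hypothetical calibration for this camera
        const double fps = 30.0;               // assumed capture rate
        cv::Mat frame, fg;
        cv::Point2f prev(-1.f, -1.f);
        while (cap.read(frame)) {
            bg->apply(frame, fg);
            cv::threshold(fg, fg, 200, 255, cv::THRESH_BINARY);   // drop MOG2 shadow pixels
            std::vector<std::vector<cv::Point>> contours;
            cv::findContours(fg, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
            for (const auto& c : contours) {
                if (cv::contourArea(c) < 500) continue;           // ignore small blobs
                cv::Moments m = cv::moments(c);
                cv::Point2f cur(static_cast<float>(m.m10 / m.m00),
                                static_cast<float>(m.m01 / m.m00));
                if (prev.x >= 0) {
                    double dx = cur.x - prev.x, dy = cur.y - prev.y;
                    double pixels = std::sqrt(dx * dx + dy * dy);
                    std::printf("approx speed: %.1f m/s\n", pixels * fps * metersPerPixel);
                }
                prev = cur;
                break;                                            // one blob per frame keeps it simple
            }
        }
        return 0;
    }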

 

Once the data is collected locally it is displayed using a custom web interface and sent to GE Predix cloud securely via Predix Machine. The Predix Analytics service uses this data for historical analysis supporting traffic management in the city as well as predictive management of the smart infrastructure. Transportation is one use case but the applications of the data are endless; including areas like smart parking, pedestrian safety and environmental planning.

Overall this demo really shows the power of using existing technologies in new and creative ways, and also the power that comes from combining edge compute with cloud compute in a single application. 

Improve Deep Learning Performance, Enable Inferences on FPGAs with Intel® Computer Vision SDK Beta R3

Intel Computer Vision SDK usages

The R3 Beta Release Provides New Deep Learning Capabilities, Frameworks Support, & Performance Improvements

Software developers and data scientists working on computer vision, neural network inference, and deep learning deployment capabilities for smart cameras, robotics, office automation, and autonomous vehicles can accelerate their solutions across multiple types of platforms: CPU, GPU, and now FPGA. The new Intel® Computer Vision SDK Beta R3 (Intel® CV SDK) delivers support on select Intel® Arria® FPGA platforms. This latest toolkit also improves other deep learning and traditional computer vision capabilities; expands support for custom layers, fp16, and topology level tuning in Caffe* framework models; and adds technical preview for importing TensorFlow* and MXNet* framework models. 

Download Now

Get more details about new and enhanced features below.

 

Introducing FPGA Support

The Intel® CV SDK Beta R3 release now supports Convolutional Neural Network (CNN) workload acceleration on target systems with an Intel® Arria® 10 GX FPGA Development Kit, where the SDK's Deep Learning Deployment Toolkit and OpenVX™ deliver inferencing on FPGAs.

A typical computer vision pipeline with a deep learning application may consist of vision functions (vision nodes) and CNN nodes. The Intel CV SDK software package includes the Model Optimizer utility, which can accept pre-trained models from popular deep learning frameworks such as Caffe, TensorFlow and MXNet. The software generates Inference Engine-based CNN nodes in C code.

Developers can combine the Inference Engine-based CNN nodes with other vision functions to form a full computer vision pipeline application. The CNN nodes are accelerated in the FPGA add-on card, while the rest of the vision pipelines are executed on the host Intel® architecture processor.
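For the vision-node portion of such a pipeline, a minimal graph built with standard Khronos OpenVX calls might look like the sketch below; the Inference Engine-generated CNN node that would be attached for FPGA acceleration is SDK-specific and omitted here, and the image sizes are arbitrary.

    #include <VX/vx.h>
    #include <cstdio>

    // Minimal OpenVX graph: blur an input image, then compute Sobel gradients.
    // A real application would map camera frames into 'input' and append CNN nodes.
    int main() {
        vx_context ctx   = vxCreateContext();
        vx_graph   graph = vxCreateGraph(ctx);

        vx_image input   = vxCreateImage(ctx, 640, 480, VX_DF_IMAGE_U8);
        vx_image blurred = vxCreateImage(ctx, 640, 480, VX_DF_IMAGE_U8);
        vx_image gradx   = vxCreateImage(ctx, 640, 480, VX_DF_IMAGE_S16);
        vx_image grady   = vxCreateImage(ctx, 640, 480, VX_DF_IMAGE_S16);

        vxGaussian3x3Node(graph, input, blurred);      // pre-processing vision node
        vxSobel3x3Node(graph, blurred, gradx, grady);  // edge/gradient vision node

        if (vxVerifyGraph(graph) == VX_SUCCESS) {
            vxProcessGraph(graph);                     // vision nodes run on the host; CNN nodes would target the FPGA
            std::printf("graph executed\n");
        }

        vxReleaseImage(&input);
        vxReleaseImage(&blurred);
        vxReleaseImage(&gradx);
        vxReleaseImage(&grady);
        vxReleaseGraph(&graph);
        vxReleaseContext(&ctx);
        return 0;
    }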
 

Computer Vision Pipeline Application


To learn more about new features that include enhanced deep learning, supported topologies, and improved performance, see the Intel® Arria® FPGA Support Guide. For deeper technical details and access to collateral, contact your Intel representative or send us an email.

Share Your Insight on FPGA Topologies Needed

Intel is interested in connecting with customers using the new FPGA features, and in learning which FPGA topologies are most needed. Customers are asked to connect with Intel at our public Computer Vision SDK community forum, or by email.

 

Optimize Deep Learning

The Intel CV SDK Beta R3, which contains the Deep Learning Deployment Toolkit, also provides new capabilities and additional framework models support so users have more usage opportunities. It:

  • Supports custom layers, fp16 and topology level tuning for Caffe framework models. This means that this optimized framework has near universal single and object recognition in the Inference Engine for high performance and portability across multiple types of Intel platforms. 
  • Technical preview: Adds new import capabilities for TensorFlow and MXNet framework models into the Deep Learning Deployment Toolkit Inference Engine. More details can be found in documentation.
  • Adds new capabilities and code samples for Neural Style Transfer and Semantic Segmentation topologies.

Deep Learning enhancements include functions running on an Intel GPU, along with strong performance improvements:

  • Provides an auto-tuning mechanism for choosing the best kernel/primitive implementation for a given Intel GPU.
  • Delivers performance improvements of up to 60+ percent1 for select topologies (PVANET, Resnet50, Googlenetv3, b32) and batch sizes (SSD_VGG batch 1) with new primitives. (Source: Intel Corporation.)1

 

Traditional Computer Vision Enhancements

  • Delivers enhanced memory footprint performance for OpenVX pipelines. 
  • Supports Khronos OpenVX Neural Networks Extension 1.2 and is compatible with Ubuntu*, CentOS* and Yocto* OSes when deployed on an Intel CPU. 
  • Create OpenVX applications more easily using a new Eclipse* plugin for a fully integrated development experience: create a new OpenVX project, add graphs and edit them with the graph designer, generate code automatically when modifying graphs, and profile and debug graphs.

Download the new Intel CV SDK Beta R3 now.

Resources

 

1Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark & MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. Benchmark Source: Intel Corporation. Optimization Notice: Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User & Reference Guides for more information regarding the specific instruction sets covered by this notice. Notice Revision #20110804

OpenVX and the OpenVX logo are trademarks of the Khronos Group Inc.

Intel® Software Innovator Adam Ardisasmita: Teaching Kids through Educational Games

Growing up with games and wanting to learn how they work is what started Intel® Software Innovator Adam Ardisasmita’s journey in game development. Now he spends his time developing educational games to teach young children and working to grow the game developer community in Indonesia.

Tell us about your background.

I graduated cum laude with a degree in Information and System Technology from Institut Teknologi Bandung, the best IT University in Indonesia. In my final year at university, I co-founded and became CEO of Arsanesia, a mobile game developer company based in Bandung. I am also a tech blogger and deeply passionate about growing the game developer ecosystem in Indonesia through community.

How did you get started in technology?

Video games are what inspired my love of technology. I was highly influenced by the video games I played on both PC and console as a kid. One of the first games I remember playing was Lode Runner. The development of technology behind video games has always amazed me and inspires me to be able to combine innovation with gaming.

What projects are you working on now?

Right now I'm working on a project to help improve education through games and technology. It is under a special business unit of our company called Arsa Kids. We are experimenting with various technologies to make a bigger impact on education. We have made a virtual reality (VR) game to help children learn about constellations and a mobile game to improve skills and knowledge in younger children.

Tell us more about Arsa Kids.

Arsa Kids was born in 2015 and focuses on early childhood education for 3-6 year olds. We make games for kids so that they can play and learn using a smartphone. Recently we began looking to work more directly with schools, educators, and the government to create a broader and more integrated product, but that is still in development right now.

Tell us about a technology challenge you’ve had to overcome in a project.

Using technology as our main focus is exciting and also challenging. I am able to play with the latest technology and prototype a lot of cool ideas, but on the other hand the technology itself is still not mature. For example, when we are talking about VR, there are still a lot of challenges in delivering an immersive experience to our users. We still need highly specific hardware and super-optimized code to make sure our VR game is enjoyable. On the hardware side there are still a lot of limitations in terms of sensors and mobility, and the most crucial part is the price. The cost of hardware not only makes it difficult for us to experiment with new technology, it also affects our customers: the more costly the equipment, the more our target market shrinks to those who can afford to buy the hardware.

What trends do you see happening in technology in the near future?

I believe augmented reality (AR) will be one of the biggest things happening in the future. I think that the fact that both Apple and Google are now gearing up their devices with AR technology, with the Apple* ARKit and Google* ARCore, is a sign of this being a driver of where technology is heading. It will make AR become more mainstream and will also create momentum for mixed reality (MR) and VR to become more popular. Besides that, I also believe that big data and machine learning will be an essential part of the future in software development.

At Arsa Kids, we are trying to integrate real-world educational tools with digital tools. I think one way is using AR. We are still doing research on which kind of integration will fit perfectly with AR for education. We are also working on building a new engine that uses machine learning to help analyze the children who play and learn with our products. In the future, we want to be able to help both parents and educators assess their students' potential and passion.

How does Intel help you succeed?

One of the biggest benefits has been the ability to network with other Intel Software Innovators from around the world. I can learn, get inspired, and also collaborate with some of the brightest minds out there. Access to new technology trends and hardware has helped me to prepare and invest in new projects so that we can be among the early adopters of the latest cool things.

Want to learn more about the Intel® Software Innovator Program?

You can read about our innovator updates, get the full program overview, meet the innovators and learn more about innovator benefits. We also encourage you to check out Developer Mesh to learn more about the various projects that our community of innovators are working on.

Interested in more information? Contact  Wendy Boswell

Adobe Max Showcased Vivid Content and the Benefits of CPU Optimization

Adobe Max 2017 Post Show Attendees

More than ten thousand of the world’s best video editors, moviemakers, photographers, and digital artists descended on Las Vegas for Adobe Max, October 18-20, 2017. 

As mentioned in our pre-event blog post, Adobe’s annual customer and product conference showcased an entire year of our collaborative work around CPU optimizations, partner relationships, and more.

Here are some of the highlights that impressed me most at this year’s event:  

  • Microsoft* and HP* launched powerful PCs designed specifically for digital content creation (DCC), adding features creators love such as fast Intel CPUs, touch screens and controller dials for easy editing.
  • Adobe announced the release of Adobe* Dimension* 3D modeling and creation software, which scales beautifully to CPU cores and was a key Intel booth demo.
  • Intel showed one of the first US public demos of new VR editing functionality integrated into Adobe Premiere* Pro. 
  • Incredible Intel processor performance scaling was demonstrated in our booth, the result of years of engineering work between the companies: 
    • 8K RED* digital camera content rendering at 60fps using all 36 “Skylake-X” Intel® Core™ i9 processor threads in Premiere Pro
    • Adobe Dimension 3D rendering scaling almost linearly from quad core to 16 cores.
    • The first ever public showing of the Adobe DNG* raw image converter with new optimizations showing nearly an order of magnitude improvement on the same 16-core system with optimized vs. non-optimized code.
      CPU Photoshop demo
  • Adobe announced a new version of Photoshop* that adds 360 panorama support and Microsoft* Dial support, and they previewed its upcoming “Deep upscale” feature.
  • Sensei* AI everywhere -- Adobe’s AI framework was in almost every major keynote tech demo, and it underpins 15 Adobe applications today. Intel looks forward to working with Adobe to make sure Sensei runs great on Intel.

    Sneaks event
  • The Adobe Sneaks event showed off exciting “in the labs” demos of potential future products and technology features. Many show promise for scaling to super-fast Intel processors. For example, Cloak removes unwanted images frame by frame from video, Physics Pak uses physics to scale objects and create images, Sidewinder is a VR stereo depth application, and DeepFill is a content aware image fill tool.

    Adobe Max Showcase Dunkirk Demo
  • Intel showcased the Dunkirk* VR experience at our booth to demonstrate how Virtual Reality drives totally new experiences demanding great performance on both the creation and the consumption side.
  • Adobe announced that there are 100M assets available on Adobe Stock* (2D, 3D, video, and templates), providing assets across all workflows and Adobe applications. The sheer number of quality assets feeds the demand for fast Intel-based systems to plow through all the creativity being thrown at them.

Thank you to everyone who helped make Adobe Max such an exciting event for Intel and our developer ecosystem! 

For more information on VR content optimization, visit the Intel Developer Zone VR page.

Intel Partner China Railway Corporation is Named Finalist for the SUPERUSER Award

Intel Partner China Railway Corp Intro Image
Image credit: LA Times

The China Railway Corporation was named by the SUPERUSER Editorial Advisory Board as a finalist for the SUPERUSER Award to be presented at the OpenStack Sydney Summit, November 6-8, 2017.

For public transportation and freight transport, China Railway Corporation enjoys a tremendous business scale. In 2016, the daily average passenger transport volume was 7.58 million persons, and the daily average cargo transport volume was 7.26 million tons. The train ticket website has more than 300 million registered users, and during the 2017 Spring Festival travel rush, the volume of online train ticket sales reached 9.33 million in a single day, with a daily average amount of 7.69 million tickets.

China Railway is a state-owned enterprise with exclusive control of railway transport, including operation and management of passenger and freight transport, management of related construction projects, and national railway transport safety. The enormous scale and the ever-increasing passenger and freight volume requires solid technical support from China Railway’s IT department.

That IT infrastructure is transforming to become more efficient, flexible, easy to deploy, secure and controllable. Its aim is to provide customers with more convenient information inquiries, online ticket purchasing, electronic payments and other network services. In addition, requirements such as business innovation and application innovation support management transformation from a planned economy to a market economy.

The IT transformation makes use of innovative technologies such as cloud computing, big data analysis, Internet of Things, and the mobile Internet. Building and using the railway’s cloud platform provides a more efficient, convenient, and energy-efficient IT infrastructure. The railway’s basic requirements for a cloud platform are stability, reliability, usability and compliance with business and state regulatory requirements.

To support the China Railway’s transformation from a traditional passenger and freight transport enterprise to a modern logistics enterprise, China Railway decided to develop a cloud computing solution. The corporation is deploying more than a dozen applications in five major categories on the China Railway Cloud, such as passenger transportation, freight, scheduling, and infrastructure. The overall application migration and deployment are mostly completed and in production.

In 2014, China Railway started developing its open source cloud solution based on OpenStack. In the process, China Railway has contributed 734 patch sets, 5,979 lines of code, and submitted and resolved 47 bugs for the OpenStack community.

OpenStack is the key to transforming the China Railway information system. The adoption of OpenStack marked the first time China Railway has fully embraced open source technology. It saved the corporation millions of dollars and the application launch cycle was shortened from several months to a day or two, enabling much quicker response to business requirements. In addition, cloud computing has improved resource utilization, with data center energy consumption being cut by approximately 50 percent.

In addition to OpenStack, the China Railway Cloud depends on KVM, OpenVSwitch/LinuxBridge, Hadoop, Kafka, Flume, Spark, CentOS, LXC, Docker, Kubernetes, OpenShift, Ceph, GlusterFS, Redis, MongoDB, MySQL/MariaDB, Ansible, Open-Falcon, ELK, ZeroMQ/RabbitMQ, and other open source software.

China Railway has deployed about 5,000 Intel® Xeon® processor-based server nodes, including about 800 KVM nodes and about 730 VMware nodes; 20PB SAN storage, 3PB distributed storage (Ceph). An additional 2,000 Intel Xeon processor-based server nodes are to be deployed by the end of 2017.

The cloud platform, with a scale of 800 physical nodes, hosts thousands of VMs and a dozen mission-critical applications. It powers the production of 18 railway bureaus and more than 2,000 railway stations. The OpenStack cloud platform was able to handle the huge pressure the Spring Festival peak put on the system, with more than 31 billion daily average page views, while also supporting stable, safe and uninterrupted 24/7 operation of real-time dispatching management for all the trains, locomotives and vehicles.

The China Railway team is innovating with OpenStack to improve information system management as well as the customer experience. Based on open source components, China Railway developed the Operation Management System (OMS) to complement the cloud software. OMS includes monitoring, automation and analysis services. China Railway has verified the stability of operating 800 servers hosting 100,000 VMs in the same region with high-availability control nodes.

In addition, China Railway modified front-end functions to optimize the customer experience and added operation type logs for easier archiving by administrators, as well as permissions control, failover functions, and more.

Experts from Intel’s Open Source Technology Center worked with the China Railway IT department to test, tune and optimize the performance of the China Railway Cloud infrastructure. Testing and verification were done to check the maximum scale that China Railway Cloud could handle under a single-region deployment mode and find an efficient and optimized architecture design solution for it. The configuration was optimized to ensure that the cloud services could run effectively, stably and reliably in ultra-large-scale deployments and heavy-load situations.

Tests and optimizations were conducted on the control plane, data plane and the operations of critical business applications on the China Railway Cloud. The team found performance bottlenecks and optimized them, achieving the maximum performance of the hardware platform, verifying China Railway Cloud’s performance and stability, and also providing a reference architecture for production deployment.

We are excited that China Railway cloud has been nominated for the SUPERUSER Award, and we look forward to a great OpenStack Summit in Sydney. 

For those attending Summit, or those who want to follow-up afterwards, click here for information on the Summit session about China Railway’s OpenStack journey. 

© 2017, Intel Corporation. Intel, the Intel logo, and Xeon are trademarks or registered trademarks of Intel Corporation.  Other names may be claimed as the property of others.


MeshCentral2 - New Multi-OS Routing Tool

MeshCentral2 is an open source, web-based remote computer management site. It provides many features on the web page including remote desktop, file access, remote terminal and much more. However, MeshCentral2 is also a powerful server for relaying any TCP connection over the Internet. This is super useful when doing RDP, SSH, SCP or running any custom tools. Imagine using MeshCentral2 to map any port on your local computer to any TCP port on any managed computer anywhere on the Internet. This works across proxies, NATs and firewalls.

Today, I am announcing MeshCentral2 v0.1.0-f on NPM with the new MeshCommand (MeshCmd) tool. The first feature of the new tool is TCP port mapping, and it is multi-OS, running on Windows and many variants of Linux. We have a new demonstration video showing how the tool works. MeshCmd can be downloaded from an installed MeshCentral2 web site and used to freely route TCP connections. It's easy to use and has plenty of interesting applications.

While MeshCommand is interesting, it hides something even more amazing: the way it was built. The MeshAgent2 executable used to manage computers with MeshCentral2 is in reality a light agent with a JavaScript hosting environment. All the smarts are pushed from the server in the form of a JavaScript file. This is already a game changer for computer management. But there is an additional secret: if you append JavaScript to the MeshAgent2 executable, the agent will run it like a local tool. So, creating new cross-platform tools in MeshCentral2 is just a question of appending the right JavaScript to the MeshAgent2 executable. Every OS that MeshAgent2 is compiled for can also run the new MeshCmd, and much more in the future. In fact, MeshCentral2 appends the JavaScript on the fly when you download MeshCmd.exe.

MeshCentral2 is pretty sweet since it's coded in JavaScript on the browser, server, agent and in tools. Except for the agent executable itself, it's one language across all components, fully cross-platform in all cases. Many thanks to Bryan Roe, who has been working like crazy on MeshAgent2 and making all of this possible. MeshCentral2 is still in beta and should not be used in production environments.

Enjoy!
Ylian
Previous blogs: http://www.intel.com/software/ylian
MeshCentral2: http://www.meshcommander.com/meshcentral2

MeshCommand demonstration: https://www.youtube.com/watch?v=S38mg_BPe-M

IoT JumpWay Intel® Computer Vision SDK Windows Console TASS PVL Webcam Security System

Introduction

Here you will find a sample application for the TechBubble Assisted Sight System Photography Vision Library (TASS PVL), a computer vision security system using the Intel® Computer Vision SDK and an Intel® Edison board connected to the Internet of Things via the TechBubble Technologies IoT JumpWay.

Once you understand how it works you are free to modify the app accordingly.

This project uses two applications:

  1. A Windows* Computer Vision application.
  2. A Node JS application on an Intel® Edison platform that receives commands to activate LEDs and a buzzer when known or unknown faces are detected.

Software requirements

  1. TechBubble IoT JumpWay Node JS MQTT Client Library

  2. TechBubble IoT JumpWay WebSocket MQTT Client

  3. Intel® Computer Vision SDK for Windows 10

  4. Microsoft Vcpkg, Paho, Json

  5. Node JS

  6. Visual Studio 2017

Hardware requirements

  1. Windows PC with 6th Generation Intel® Core™ i-series processors with Intel® Iris® Pro Graphics and HD Graphics. In our example we are using an Intel® NUC NUC7i7BNH with Intel® Optane Memory.

  2. 1 x Intel® Edison board

  3. 1x Grove starter kit plus - Intel IoT Edition for Intel® Edison platform

  4. 1 x Blue LED (Grove)

  5. 1 x Red LED (Grove)

  6. 1 x Buzzer (Grove)

  7. 1 x Webcam

Before You Begin

There are a few tutorials that you should follow before beginning, especially if it is the first time you have used the TechBubble IoT JumpWay Developer Program. If you do not already have one, you will require a TechBubble IoT JumpWay Developer Program developer account, and some basics to be set up before you can start creating your IoT devices. Visit the following IoT JumpWay Developer Program Docs (5-10 minute read/setup) and check out the guides that take you through registration and setting up your Location Space, Zones, Devices and Applications (About 5 minutes read).

Preparing Your Windows Device

  • Install Intel® Computer Vision SDK

  • Install Microsoft Vcpkg, Paho, Json

  • Install Visual Studio 2017

  • Install Paho MQTT

    C:\src\vcpkg> vcpkg install paho-mqtt:x64-windows

    Once installed, edit the MQTTAsync.h and MQTTClient.h files in C:\src\vcpkg\installed\x64-windows\include.

    Change:

    #if defined(WIN32) || defined(WIN64)
        #define DLLImport __declspec(dllimport)
        #define DLLExport __declspec(dllexport)
    #else
        #define DLLImport extern
        #define DLLExport  __attribute__ ((visibility ("default")))
    #endif

    To:

    #if defined(_WIN32) || defined(_WIN64)
        #define DLLImport __declspec(dllimport)
        #define DLLExport __declspec(dllexport)
    #else
        #define DLLImport extern
        #define DLLExport  __attribute__ ((visibility ("default")))
    #endif
  • Install Nlohmann Json

    C:\src\vcpkg> vcpkg install nlohmann-json:x64-windows
  • Plug In Your Webcam

    Plug in your webcam and make sure that you have all of the relevant drivers installed for your machine to recognise the device.

Cloning The Repo

You will need to clone this repository to a location on your Windows development machine. Navigate to the directory you would like to download it to and issue the following command, or use the Windows GitHub GUI.

C:\YourChosenLocation> git clone https://github.com/TechBubbleTechnologies/IoT-JumpWay-Intel-Examples.git

IoT JumpWay Device Connection Credentials & Settings

  • Follow the TechBubble Technologies IoT JumpWay Developer Program (BETA) Location Application Doc to set up your IoT JumpWay Location Application.

  • Set up an IoT JumpWay Location Device for TASS PVL, ensuring you set up your camera node, as you will need the ID of the camera for the project to work. Once you create your device, note the MQTT username and password, and the device ID and device name exactly; you will also need the zone and location ID. You will later need to edit your device and add the rules that allow it to communicate autonomously with the Intel® Edison board, but for now these are the only steps that need doing.

Follow the TechBubble Technologies IoT JumpWay Developer Program (BETA) Location Device Doc to set up your devices.

IoT JumpWay Device Creation Docs

  • Locate and update the following code in TASS-PVL-Windows-Console.cpp, and replace with your device settings.
    int IntelliLanLocation = 0;
    int IntelliLanZone = 0;
    int IntelliLanDevice = 0;
    int IntelliLanSensor = 0;
    std::string IntelliLanDeviceN = "YourIoTJumpWayDeviceNameHere";
    std::string IntelliLanDeviceU = "YourIoTJumpWayDeviceUsernameHere";
    std::string IntelliLanDeviceP = "YourIoTJumpWayDevicePasswordHere";
  • You may also need to edit the following value, which allows the application to connect to your webcam. Generally the value is either 1 or 0; in my case it is 1, which is the default setting in the provided application. If you receive an error on startup that the application cannot connect to your camera, modify this setting and ensure that you have installed all the required drivers for your camera.
    int camera = 1;

Additional Include Directories & Library Directories

If you installed the Intel® Computer Vision SDK to any directory other than the default (C:\Intel\CV SDK) you will need to update the Additional Include Directories & Library Directories settings to reflect your installed location.

Additional Include Directories

Additional Library Directories

Setting Up Your Intel® Edison Board

IoT JumpWay Intel® Edison Dev Kit IoT Alarm

The next step is to set up your Intel® Edison board so that TASS PVL can communicate with it via the IoT JumpWay. For this, we have already created a tutorial, the IoT JumpWay Intel® Edison Dev Kit IoT Alarm, that will guide you through the process. The only difference is that you do not need to set up the Python commands application, as in this project TASS PVL replaces it. To save time, please follow only the steps for the Intel® Edison device Node JS application.

You will find the tutorial on the following link:

IoT JumpWay Intel® Edison Dev Kit IoT Alarm

Once you have completed that tutorial and have your device setup, return here to complete the final integration steps.

Setting Up Your Rules

You are now ready to take the final steps. At this point you should have everything set up, and your Intel® Edison Dev Kit IoT Alarm should be running and connected to the IoT JumpWay, waiting for instructions.

Next we are going to set up the rules that allow TASS PVL to control your Intel® Edison Dev Kit IoT Alarm autonomously. Go back to the TASS PVL device page and make sure you are on the edit page. Scroll down to below where you added the camera node and you will see you are able to add rules.

IoT JumpWay Intel® Edison Dev Kit IoT Alarm

The rules that we want to add are as follows:

  1. When a known person is identified, turn on the blue LED.

  2. When an unknown person is identified, turn on the red LED.

  3. When an unknown person is identified, turn on the buzzer.

The events are going to be triggered by warning messages sent from TASS PVL, so in the On Event Of drop-down, select WARNING. Then select the camera node you added to the TASS PVL device, as this is the sensor that the warning will come from. Next, choose RECOGNISED in the With Warning Of drop-down, which means the rule will be triggered when the IoT JumpWay receives a warning message that a known person has been identified. Then select Send Device Command for the Take The Following Action section, and choose the Intel® Edison board as the device, the blue LED as the sensor, toggle as the action, and on as the command. This tells the board to turn on the blue light in the event of a known person being detected.

You should repeat these steps for the red LED and buzzer for the event of NOT RECOGNISED to handle events where an intruder, or unknown person is identified.

Ready To Go!

And that is it. If you have followed the tutorials correctly, you are now ready to fire up your new security system. Run the Windows console application to begin, and you will see the console window open up along with a live stream of your camera, complete with a bounding box and, if the subject is happy, an emotion status. To train a known user, simply have them stand in front of the camera, then press R to register their face and S to save.

Viewing Your Data

When the program detects a known user or intruder, it will send sensor and warning data for the device it was captured from to the TechBubble IoT JumpWay. You will be able to access the data in the TechBubble IoT JumpWay Developers Area. Once you have logged into the Developers Area, visit the TechBubble IoT JumpWay Location Devices Page, find your device and then visit the Warnings & Sensor Data pages to view the data sent from the application.

IoT JumpWay Sensor Data

IoT JumpWay Warning Data

IoT JumpWay Intel® Computer Vision SDK Bugs/Issues

Please feel free to create issues for bugs and general issues you come across whilst using the IoT JumpWay Intel Examples. You may also use the issues area to ask for general help whilst using the IoT JumpWay Intel Examples in your IoT projects.

IoT JumpWay Intel® Computer Vision SDK Contributors

Adam Milton-Barker, Intel Software Innovator

-NEW- 15.60.0.4849: Intel® Graphics Driver for Windows® 10 64-bit [6th, 7th & 8th Generation]

DRIVER VERSION: 15.60.0.4849

(Windows Driver Store Version 23.20.16.4849)

DATE: November 06, 2017

SUMMARY:

  • HDR Launch Driver
  • Windows* Mixed Reality Launch Driver

Here are some of the benefits for developers in driver 15.60.0.4849:

  • WDDM 2.3 driver.
  • Fall Creators Update features supported on Intel 7th gen and newer processor graphics including HDR and Mixed Reality.
  • Support for Wide Color Gamut
  • Enables 10-bit HDR playback over HDMI
  • Enables video processing and video decode acceleration in DirectX* 12.
  • Support for DXIL*, including DirectX* 12 Shader model 6.0 and 6.1. This allows applications to use shaders compiled with the LLVM-based HLSL compiler from Microsoft*. In particular, it supports Wave Intrinsics, allowing fast sharing of data within SIMD execution.
  • Improved memory usage in OpenCL* applications

Netflix* HDR and YouTube* HDR are available for the first-time ever on the PC, on Intel® Graphics!

This driver enables the Microsoft Windows® 10 Fall Creators Update, thereby providing support for users to experience HDR playback & streaming on systems with Intel® UHD Graphics 620 and Intel® HD Graphics 620 or better, enjoyed on HDR capable external monitors and TVs. For more info please see the whitepaper here.

Escape the everyday to a world even beyond your imagination! With Windows* Mixed Reality, you can explore new worlds, travel to top destinations, play exciting games, lose yourself in the best movies and entertainment and more. With a headset and a Windows* PC powered by a 7th Gen Intel® Core™ i5 processor with Intel® HD Graphics 620 or better, you can go places you’ve always dreamed - without even leaving home!

This new WDDM 2.3 driver also provides security fixes, support for Wide Color Gamut, enables 10-bit HDR playback over HDMI, and enables video processing and video decode acceleration in DirectX* 12. For a full list of Microsoft Windows® 10 Fall Creators update features, please see here.

Experience the magic of The LEGO® Ninjago® Movie™ Video Game on processors with Intel® HD Graphics 620 or better.

Battle singing Orcs and score legendary goals while enjoying performance optimizations and playability improvements in the newly released Middle-earth: Shadow of War* and Pro Evolution Soccer 2018 on Intel® Iris® Pro Graphics. Take on The Cabal* and keep the enemy in your sights in these newly released, fast action-packed sequels to legendary favorites, Destiny 2*, Call of Duty®: WWII, and Divinity: Original Sin 2* on Intel® Iris® Pro Graphics.

Check out the all new look and feel of gameplay.intel.com, where you’ll find recommended game settings for many of your favorite PC games.

Install the Intel® Driver & Support Assistant (previously called Intel® Driver Update Utility), which now automatically checks for drivers on a regular basis and can provide notifications when new drivers are available.

Windows Mixed Reality requires a compatible Windows® 10 PC and headset, plus the Windows® 10 Fall Creators Update; PC requirements may vary for available apps and content.

This document provides information about Intel’s Graphics Driver for:

  • 8th Generation Intel® Core processors with Intel® UHD Graphics 610, 620, 630.
  • 7th Generation Intel® Core processors, related Pentium®/ Celeron® Processors, and Intel® Xeon processors, with Intel® Iris® Plus Graphics 640, 650 and Intel® HD Graphics 610, 615, 620, 630, P630.
  • 6th Generation Intel® Core processors, Intel Core™ M, and related Pentium® processors, with Intel® Iris® Graphics 540, Intel® Iris® Graphics 550, Intel® Iris® Pro Graphics 580, and Intel® HD Graphics 510, 515, 520, 530.
  • Intel® Xeon® processor E3-1500M v5 family with Intel® HD Graphics P530
  • Pentium®/ Celeron® Processors with Intel® HD Graphics 500, 505

CONTENTS OF THE PACKAGE:

  • Intel® Graphics Driver
  • Intel® Display Audio Driver (upgraded to v10.24.00.01)
  • Intel® Media SDK Runtime
  • Intel® OpenCL* Driver
  • Intel® Graphics Control Panel
  • Vulkan* Runtime Installer

Operating System Support

On 8th Generation Intel® Core processors, 7th Generation Intel® Core processors, 6th Generation Intel® Core and Intel® Mobile Xeon processors and related Pentium/Celeron:

  • Microsoft Windows® 10 64-bit

NEW FEATURES:

Microsoft Windows® 10 Fall Creators update features, found here.

Support for DXIL*, including DirectX* 12 Shader model 6.0 and 6.1. This allows applications to use shaders compiled with the LLVM-based HLSL compiler from Microsoft*. In particular, it supports Wave Intrinsics, allowing fast sharing of data within SIMD execution.

KEY ISSUES FIXED:

  • Graphical anomalies may be observed in Divinity: Original Sin 2*, Pro Evolution Soccer 2018*, and Blu-ray* playback via Cyberlink PowerDVD*
  • Intermittent crashes or hangs may be observed in DOTA 2* (Vulkan* version) when switching to a lower resolution on an embedded display panel
  • Performance optimizations and playability improvements in Middle-earth: Shadow of War*
  • Improved memory usage in OpenCL* applications
  • Security improvements

SUPPORTED PRODUCTS:

HARDWARE

All platforms with the following configurations are supported:

| Intel® Graphics1 | DirectX*2 | OpenGL* | OpenCL* | Vulkan* | Intel® Quick Sync Video |
|---|---|---|---|---|---|
| 8th Generation Intel® Core™ processors with Intel® UHD Graphics 610/620/630 | 12 | 4.5 | 2.1 | 1.0.61 | Yes |
| 7th Generation Intel® Core™ processors with Intel® Iris® Plus Graphics 640/650 | 12 | 4.5 | 2.1 | 1.0.61 | Yes |
| 7th Generation Intel® Core™ processors with Intel® HD Graphics 610/615/620/630 | 12 | 4.5 | 2.1 | 1.0.61 | Yes |
| Intel® Xeon® processor E3-1500M v5 family with Intel® HD Graphics P630 | 12 | 4.5 | 2.1 | 1.0.61 | Yes |
| Pentium® Processors with Intel® HD Graphics 610 | 12 | 4.5 | 2.1 | 1.0.61 | Yes |
| 6th Generation Intel® Core™ processors with Intel® Iris® Pro Graphics 580 | 12 | 4.5 | 2.0 | 1.0.61 | Yes |
| 6th Generation Intel® Core™ processors with Intel® Iris® Graphics 540/550 | 12 | 4.5 | 2.0 | 1.0.61 | Yes |
| 6th Generation Intel® Core™ processors with Intel® HD Graphics 520/530 | 12 | 4.5 | 2.0 | 1.0.61 | Yes |
| Intel® Xeon® processor E3-1500M v5 family with Intel® HD Graphics P530 | 12 | 4.5 | 2.0 | 1.0.61 | Yes |
| Intel® Xeon® processor E3-1500M v5 family with Intel® Iris® Pro Graphics P580 | 12 | 4.5 | 2.0 | 1.0.61 | Yes |
| Intel® Core™ M processors with Intel® HD Graphics 515 | 12 | 4.5 | 2.0 | 1.0.61 | Yes |
| Pentium® and Celeron® Processors with Intel® HD Graphics 500/505 | 12 | 4.5 | 1.2 | 1.0.61 | Yes |
| Pentium® Processors with Intel® HD Graphics 510 | 12 | 4.5 | 1.2 | 1.0.61 | Yes |

Notes:

  1. If you are uncertain which Intel® processor is in your computer, Intel recommends using the Intel® Processor Identification Utility or the Intel® Driver & Support Assistant (previously called the Intel® Driver Update Utility) to identify your Intel® processor.
  2. In the Intel® Iris® and HD Graphics Control Panel (under Options > Options menu > Information Center), the ‘Installed DirectX* version’ refers to the operating system’s DirectX version, while the Information Center’s ‘Supported DirectX* Version’ refers to the Intel® Graphics Driver’s supported DirectX version. The DirectX 12 API is supported, but some optional features may not be available; applications using the DirectX 12 API should query for feature support before using specific hardware features. Please note that DirectX 12 is only supported on Windows® 10, and DirectX 11.3 support is also available on supported Microsoft* operating systems.

 

KNOWN ISSUES

  • Graphics anomalies may be observed in Assassin’s Creed® Syndicate, Titanfall® 2 (6th Gen Intel® Core™ Only), Paragon*, Elex* and other games
  • Intermittent crashes or hangs may occur in Forza Motorsport 7* (if run on systems with less than 12GB RAM), The Surge*, Tom Clancy’s The Division* (DX12*), The Guild 3*, SiSoft Sandra Benchmark*, Rise of the Tomb Raider*, Handbrake* (during AVC/HEVC transcode), or during playback of 3D video
  • Long load times may be observed for some tracks in Forza Motorsport 7*
  • Bezel value may not correctly increment/decrement when using bezel correction in collage mode

More on Intel® Core™ processors

For more information on the Intel® Core™ processor family, Intel® Xeon® processor E3 family, and 8th Generation Intel® Core processors, please visit:

Intel 8th Generation Core Processors

http://www.intel.com/content/www/us/en/processors/core/core-processor-family.html

http://www.intel.com/content/www/us/en/processors/xeon/xeon-processor-e3-family.html

http://www.intel.com/graphics

Work and play in high resolution with Intel® UHD Graphics, Iris® Graphics, Iris® Plus Graphics, and Iris® Pro Graphics. Watch captivating 4K Ultra HD (UHD) video on up to three screens, edit photos and videos like a pro, and immerse yourself in vividly rendered, seamless 3D gameplay - all with the added power boost of an Intel® Core™ processor. Intel® Graphics bring stunning visuals to thinner and lighter portable devices, like laptops, 2 in 1s, and desktop computers.

We continuously strive to improve the quality of our products to better serve our users and appreciate feedback on any issues you discover and suggestions for future driver releases. If you have an issue to submit, please follow the guidance found here Default level information for reporting Graphics issues.

*Other names and brands may be claimed as the property of others.

INTEL® XEON® SCALABLE PROCESSORS DELIVER A BIG BOOST IN SIMULATION PERFORMANCE

ANSYS Fluent* 18.2 Powered by Intel® Xeon® Gold 6148 Processors
Takes Complex Manufacturing Simulation to the Next Level

When it comes to simulation technologies, engineers and designers always want more: more granular detail, more variables, greater accuracy, and faster time to results.  Now, thanks to a collaboration between Intel and engineering simulation software provider ANSYS, they’re getting a major leap in performance.

The Intel® Xeon® Gold 6148 processor – part of the new Intel® Xeon® Scalable processor family – boosts performance for ANSYS Fluent* 18.1 by up to 41 percent versus a previous-generation processor, and it provides up to 34 percent higher performance per core,1 which helps contain licensing costs.

“ANSYS teamed with Intel to make sure software and hardware improvements go hand in hand,” said Dr. Wim Slagter, Director of HPC and Cloud Alliances for ANSYS.  “The latest combination of ANSYS Fluent 18.1 and Intel Xeon Gold 6148 processors is a clear testament of impressive overall performance gains achieved for customers who want to increase their engineering productivity.”

ANSYS Fluent is a versatile computational fluid dynamics (CFD) tool and multi-physics solver that’s widely used in a range of applications, including automotive, aerospace, academia, oil and gas, marine, and formula 1 racing.  Typical workload sizes range from 2 million to 500 million cells. 

An improved processor for greater simulations

The Intel Xeon Gold 6148 processor contains more cores, higher memory bandwidth, and an enhanced cache structure compared  to the previous-generation Intel® Xeon® processor E5 v4 product family.  Intel worked with ANSYS to optimize its Fluent 18.1 product for new hardware features, using Intel software development products to help ensure that the additional processing power would deliver significant performance gains for real-world simulations.

A key focus of the optimization effort was to improve vectorization in the solver code to leverage the advanced vector processing capabilities of Intel® Xeon® processors.  To verify performance with the new processors, benchmark tests were run using a variety of models targeting different industries and ranging in size from two million to 33 million cells. These benchmarks found that a two-socket server based on the Intel Xeon Gold 6148 processor can improve performance for ANSYS Fluent by up to 41 percent1 versus a previous-generation server based on the Intel® Xeon® processor E5-2697 v4, and by as much as 60 percent1 versus a comparable server based on the earlier Intel® Xeon® processor E5-2698 v3.


Learn more about ANSYS Fluent 18.1 optimized for the Intel Xeon Gold 6148 processor.

See the video ANSYS Takes Complex Engineering Simulation to the Next Level, describing the collaboration process.  

Read how Intel Xeon Gold 6148 processor boosts performance for ANSYS Fluent 18.1.

As a sponsor of the Intel® HPC Developer Conference ANSYS will be presenting this material live in Denver, CO on Saturday, Nov. 11, 2017.   

 

1 Source: Intel internal testing, March 2017. Baseline: 2x Intel® Xeon® processor E5-2698 v3 (16 cores, 2.3 GHz), 128 GB total memory (8x 16 GB @ 2133 MT/s DDR4), Red Hat Enterprise Linux* 7.3. Next-gen: 2x Intel® Xeon® processor E5-2697 v4 (18 cores, 2.3 GHz), 128 GB total memory (8x 16 GB @ 2400 MT/s DDR4), Red Hat Enterprise Linux* 7.3. New: 2x Intel® Xeon® Gold 6148 processor (20 cores, 2.4 GHz), 192 GB total memory (12x 16 GB @ 2666 MT/s DDR4), Red Hat Enterprise Linux* 7.3.

Live Video Stream Object Classification with Intel® Movidius™ NCS – Update 1

The goal of this project is to use an Intel® Movidius™ Neural Compute Stick (NCS) for object classification in live video streams. The main purpose of the NCS is to eliminate the need for retraining a neural network for machine learning applications. In other words, without the need for a super powerful computer, pre-trained models can be loaded onto an NCS and used during inference – the prediction step. The NCS is fairly small (72.5 mm x 27 mm x 14 mm), which makes it perfect for low-power applications like drone navigation systems. In this report, I’ll explain the major steps that are necessary for identifying objects in a live video stream using an NCS.

1. Installing the Intel® Movidius™ software developer kit (SDK)

I used the Ubuntu 16.04 operating system for installing the Intel® Movidius™ software developer kit (SDK). I just followed the instructions, which are pretty straightforward. However, one small point might be helpful: if you have multiple versions of Python environments installed already, make sure that they are assigned different names. In other words, instead of calling both of them Python, use a naming convention like Python2 and Python3 respectively. Otherwise, during installation, the SDK might have some difficulties identifying the Python environment.

2. Loading pre-included examples

Once the SDK is installed, it’s time to try out some of the examples. You can just go to the examples folder and start running them.

3. Object classification in video

For the sake of this report, I captured live video from the laptop camera and then loaded a pre-trained neural network model onto the NCS in order to classify objects in the video frames. The model, GoogLeNet, was already trained and included in the SDK examples; it is based on this paper. As mentioned earlier, one of the major benefits of the NCS is that it does not need the model to be trained again. The model just gets loaded onto the NCS, and the NCS predicts the label of the object of interest in the frame.
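
To make the flow concrete, here is a minimal sketch of that load-and-predict loop. It assumes the NCSDK v1 Python API (the mvnc module), a GoogLeNet graph file already compiled for the NCS and saved as "graph", and OpenCV for grabbing frames; the file name, input size, and mean/scale values are assumptions and should be taken from the SDK example you are working from.

    import cv2
    import numpy as np
    from mvnc import mvncapi as mvnc

    # Open the first Neural Compute Stick found on the system
    devices = mvnc.EnumerateDevices()
    device = mvnc.Device(devices[0])
    device.OpenDevice()

    # Load the pre-compiled GoogLeNet graph onto the NCS (no retraining needed)
    with open('graph', 'rb') as f:
        graph = device.AllocateGraph(f.read())

    cap = cv2.VideoCapture(0)          # laptop camera
    ret, frame = cap.read()

    # Resize and scale the frame to what the network expects (values assumed here)
    img = cv2.resize(frame, (224, 224)).astype(np.float32)
    img = (img - 128.0) * 0.007843     # example mean/scale; use the SDK example's values
    graph.LoadTensor(img.astype(np.float16), 'frame')

    # The NCS runs inference and returns one probability per class
    output, _ = graph.GetResult()
    top5 = output.argsort()[::-1][:5]
    print('Top-5 class indices:', top5)

    graph.DeallocateGraph()
    device.CloseDevice()
    cap.release()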

Below we take a look at some of the objects that were shown to the camera and the corresponding model prediction. I just want to emphasize again that just the pre-trained model is loaded onto the NCS and the NCS is doing the prediction task (inference). The model makes the predictions and the top 5 results with highest probabilities are shown on the screen. The screenshots below show some of the examples of the model prediction along with their probability. The live video shows the full demo.

Note that I did use a sheet of white paper as the background to reduce any type of noise that might come from the environment.

Actual Label: eyeglasses

NCS model top predictions: stethoscope, knot, sandal, whistle, sunglasses

Actual Label: remote control

NCS model top predictions: electric switch, mousetrap, band aid, remote, harmonica

Actual Label: computer mouse

NCS model top predictions: computer mouse, car mirror, lens cap, switch, spotlight

Actual Label: iPhone

NCS model top predictions: iPod, cell, ignitor, remote, pencil box

Actual Label: screw driver

NCS model top predictions: pencil eraser, screwdriver, mouse, paint brush.

Actual Label: notebook computer

NCS model top predictions: notebook computer, keypad, laptop, space bar, mouse

Actual Label: pen

NCS model top predictions: ball pen, paper knife, quill, screw driver

Actual Label: binder clip

NCS model top predictions: whistle, mousetrap, toaster, modem, traffic light

Analysis of the model predictions, especially in cases where the model makes the wrong prediction, shows that the model is not far off from reality. For example, in the case of the binder clip the model predicted a mousetrap, which is not unreasonable if we think about it, given that the model might not have seen a binder clip in its training data.

4. Next steps

I would like to improve the model performance so that it gives me better accuracy next time. One possible improvement is pre-processing the input images so that the model performs robustly under different lighting conditions. Another future task is connecting the NCS to a Raspberry Pi camera because, eventually, I want to use this NCS on a drone, which might have a processing unit like a Raspberry Pi. Finally, using a more powerful model that can deal with environmental noise is another aspect to consider for the next step.
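
As a first experiment in that direction, one simple pre-processing step is to equalize the lighting of each frame before handing it to the NCS. The sketch below is only an illustration of that idea, using OpenCV's CLAHE on the lightness channel; the parameter values are assumptions, not tuned settings.

    import cv2

    def normalize_lighting(frame):
        # Work on the lightness channel only so colours are preserved
        lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
        l, a, b = cv2.split(lab)
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        l = clahe.apply(l)
        return cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)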

Autonomous UAV Control and Mapping in Cluttered Outdoor Environments – Update 1

Autonomous and intelligent flight under the canopy of densely forested areas is a challenging problem yet to be addressed. It consists of giving, to an unmanned aerial vehicle (UAV), the ability to decide which is the best flight route to be taken in an unseen environment. This decision is achieved by processing, frame-by-frame, the RGB image captured by the forward-facing camera.

The capability to perform autonomous and intelligent flights under the canopy is crucial for activities such as Search and Rescue (SaR) missions [1], visual exploration of disaster areas [2], [3], aerial reconnaissance and surveillance [4], [5], and assessment of forest structure [6], [7] or riverscape [8].

Currently, we are investigating the use of Deep Learning (DL) to teach the algorithm to recognize trails and possible obstacles. Our main challenges are the variance in luminance and environmental conditions that are usually present in unstructured environments such as densely forested areas (Figure 1).

Deep learning is a subset of Machine Learning (ML) methodologies [9] that aims to simulate the way a human brain processes and learns new information [9]–[11]. Usually, ML algorithms have input and output layers, whereby raw data can be transformed prior to being fed to the input layer. In contrast, DL algorithms may have one or more hidden layers between the input and output layers. Because of this, the algorithm is expected to extract features from the raw sensory information at multiple levels and without any preprocessing or filtering of the raw data [11]. As a result, high-quality features are learned autonomously and efficiently [12].

Deep learning algorithms can be modeled as feed-forward or as recurrent neural networks. The former has no feedback connections, meaning no data is used to feedback to the model, while the latter includes feedback connections [12].

Feed-Forward Neural Networks (FFNN) consist of a significantly large number of processing units, also known as nodes, which are organized into layers. Each unit present in a layer is also connected to units from the previous layer. Typically, the memory model is simplistic, storing only the hierarchical feature set (weights) and a few other parameters [13]. FFNNs may share the same weight value or have different ones. Either way, the input data moves through the network, layer by layer, classifying the data and accumulating knowledge until it derives the output in the final layer [13].

The most common type of FFNN is the Convolutional Neural Network (CNN), due to its well-adaptable structure for image classification [13]. CNNs originated in the early 1990s with the development of LeNet; over the years, thanks to advancements in computing power, a growing number of CNN variants became available, such as AlexNet (2012) [14], ZFNet (2013) [15], GoogLeNet (2014) [16], and VGGNet (2014), amongst others. Not surprisingly, CNNs are also the most implemented model for training UAV control systems [17].

A current state-of-the-art paper [18] demonstrates a deep data-driven sensory-motor system that estimates the approximate direction of the trail. It does so by processing the camera feed frame by frame through a Deep Neural Network (DNN).

The DNN presented in [18] receives an RGB input image and outputs three values, which represent the probability of the trail being located on the left, center, or right of the image. In contrast to the approach in [18], this project aims to investigate the performance of the Inception ResNet V2 [19] network for the problem of trail identification. More details about Inception ResNet V2 will be presented in the next post; for now, we refer the reader to [20].
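
To give an idea of how Inception ResNet V2 could be adapted to this problem, here is a minimal sketch assuming Keras: the pre-trained backbone is reused and topped with a three-way softmax for the left/center/right trail classes, mirroring the output of the DNN in [18]. The input size, optimizer, and class ordering are assumptions, not the final design.

    from tensorflow.keras.applications import InceptionResNetV2
    from tensorflow.keras import layers, models

    # Pre-trained backbone with global average pooling instead of the ImageNet classifier
    base = InceptionResNetV2(include_top=False, weights='imagenet',
                             input_shape=(299, 299, 3), pooling='avg')

    # Three outputs: probability that the trail lies to the left, center, or right
    outputs = layers.Dense(3, activation='softmax')(base.output)
    model = models.Model(inputs=base.input, outputs=outputs)

    model.compile(optimizer='adam', loss='categorical_crossentropy',
                  metrics=['accuracy'])
    model.summary()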

During this project we will be using the Intel® Movidius™ Neural Compute Stick (NCS) and the Intel® Aero Ready to Fly Drone. During the first phase of the project we will be training our model using the publicly available IDSIA (Istituto Dalle Molle di Studi sull'Intelligenza Artificiale) dataset [18]. In the second phase, data will be gathered using Intel's drone. Finally, in the third phase we will explore the algorithm’s ability to reproduce the same trajectory previously recorded by the drone.

Our goal is to use the results and knowledge acquired during the simulation to form the base for further work, whereby we aim to expand the system into a real-world application.

Bibliography

[1]      D. Câmara, “Cavalry to the Rescue: Drones Fleet to Help Rescuers Operations over Disasters Scenarios.”

[2]      G. Rémy, S.-M. Senouci, F. Jan, and Y. Gourhant, “SAR.Drones: Drones for Advanced Search and Rescue Missions.”

[3]      L. Apvrille, T. Tanzi, and J. L. Dugelay, “Autonomous drones for assisting rescue services within the context of natural disasters,” in 2014 31th URSI General Assembly and Scientific Symposium, URSI GASS 2014, 2014.

[4]      A. Gaszczak, T. P. Breckon, and J. Han, Real-time People and Vehicle Detection from UAV Imagery. 2011.

[5]      A. Puri, “A Survey of Unmanned Aerial Vehicles (UAV) for Traffic Surveillance,” Tech. Pap., pp. 1–29, 2005.

[6]      L. P. Koh and S. A. Wich, “Dawn of drone ecology: low-cost autonomous aerial vehicles for conservation,” Trop. Conserv. Sci. Mongabay.com Open Access J. -Tropical Conserv. Sci., vol. 55, no. 52, 2012.

[7]      L. Wallace, A. Lucieer, Z. Malenovský, D. Turner, and P. Vopěnka, “Assessment of forest structure using two UAV techniques: A comparison of airborne laser scanning and structure from motion (SfM) point clouds,” Forests, 2016.

[8]      J. T. Dietrich, “Riverscape mapping with helicopter-based Structure-from-Motion photogrammetry,” Geomorphology, 2016.

[9]      A. Gulli and S. Pal, Deep Learning with Keras. Birmingham: Packt Publishing, 2017.

[10]    L. Tai and M. Liu, “Deep-learning in Mobile Robotics - from Perception to Control Systems: A Survey on Why and Why not,” 2016.

[11]    G. Zaccone, Getting Started with Tensorflow. Packt Publishing, Limited, 2016.

[12]    I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. Cambridge, MA: MIT Press, 2016.

[13]    S. Krig, Computer Vision Metrics: Textbook Edition. Springer International Publishing, 2016.

[14]    Michael A. Nielsen, “Neural Networks and Deep Learning,” 2015. [Online]. Available: http://neuralnetworksanddeeplearning.com/. [Accessed: 04-May-2017].

[15]    A. Krizhevsky and G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” pp. 1–9.

[16]    M. D. Zeiler and R. Fergus, “Visualizing and Understanding Convolutional Networks,” 2012.

[17]    C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going Deeper with Convolutions,” 2014.

[18]    A. Giusti, J. Guzzi, D. C. Cireşan, F.-L. He, J. P. Rodríguez, F. Fontana, M. Faessler, C. Forster, J. Schmidhuber, G. Di Caro, D. Scaramuzza, and L. M. Gambardella, “A Machine Learning Approach to Visual Perception of Forest Trails for Mobile Robots,” IEEE Robot. Autom. Lett., pp. 2377–3766, 2015.

[19]    K. Kelchtermans and T. Tuytelaars, “How hard is it to cross the room ? - Training (Recurrent) Neural Networks to steer a UAV,” 2017.

[20]    C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi, “Inception-v2, Inception-ResNet and the Impact of Residual Connections on Learning,” in Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17), 2017, pp. 4278–4284.

Follow-Up: How does Microsoft Windows 10 Use New Instruction Sets?

In my previous blog post “Question: Does Software Actually Use New Instruction Sets?” I looked at the kinds of instructions used by a few different Linux* setups, and how each setup was affected by changing the type of processor it was running on. As a follow-up to that post, I have now done the same for Microsoft* Windows* 10. In this post, I look at how Windows 10 behaves across processor generations, and how its behavior compares to Ubuntu* 16. I also went back to Ubuntu to look at the instruction usage in different usage scenarios.

Experimental Setup

Just like before, I took our “generic PC” platform in the Wind River® Simics® virtual platform tool, and ran it with two different processor models. One model was for an Intel® Core™ i7 first-generation processor (codenamed “Nehalem”), and the other model was for an Intel Core i7 sixth-generation processor (codenamed “Skylake”). 

Using these different models, I booted a Windows 10 image (build 1511, to be precise). Just like before, I ran through the first 60 seconds of the boot, ending up at an idle desktop. 

Here is a screenshot from my laptop booting a couple of Windows 10 targets to gather the statistics:

Instrumentation Windows 10 Gratuitous Screenshot

During the boot, I used the Simics tooling environment to collect statistics on the types of instructions seen. I counted the instruction usage per mnemonic, just like in the previous post. In addition, I did some runs where I looked at the instruction stream based on other criteria, as discussed below. 
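
The Simics instrumentation itself is not something I can reproduce here, but the kind of counting involved is easy to illustrate. The sketch below, which assumes a plain-text trace with one instruction per line and the mnemonic as the first token (a hypothetical format, not Simics output), builds the same per-mnemonic histogram used for the graphs that follow.

    from collections import Counter

    def mnemonic_histogram(trace_path, threshold=0.01):
        """Count dynamic instructions per mnemonic and keep those above the threshold."""
        counts = Counter()
        with open(trace_path) as trace:
            for line in trace:
                tokens = line.split()
                if tokens:
                    counts[tokens[0].lower()] += 1
        total = sum(counts.values())
        # Keep every mnemonic that makes up at least `threshold` of the dynamic instructions
        return {m: c / total for m, c in counts.items() if c / total >= threshold}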

Instructions across Generations

First, I looked at all instructions that occur as one percent or more of the dynamic instructions during the boot. The results are shown in the graph below:

Instrumentation Windows 10 Instructions Total

What we see in the Windows data is very similar to what we saw for Ubuntu 16 in the previous blog post. There are some small changes between the processor generations, but mostly the same code is run regardless of the processor. 

This is not all that unexpected for broadly used general-purpose operating system distributions like Ubuntu and Windows. The most dramatic difference seen in the previous blog post between the processor generations was for the Yocto* Linux build, which makes sense. Yocto lets you build a Linux for yourself, and its defaults can be more aggressive in terms of including code to use new instruction sets, since you don’t generally have to support a wide user base. For Ubuntu and Windows 10 with their broad user bases and common goal to work reliably for a very large number of users, having too many differences between hardware generations would make testing and quality control harder. It makes sense not to aggressively optimize for a single system, unlike what you can do when you roll your own Linux. 

Anyway, if we look at the instructions used, the most common instructions are moves, compares, jumps, and basic arithmetic. This is very similar to what we saw on Linux, but the precise instructions used do differ a bit… 

Comparing to Linux 

When changing operating system as we did here, the compiler used to build the code changes (from gcc on Linux* to Microsoft* compilers on Windows), along with conventions around how function calls and operating system calls are done. All of this impacts the instruction selection process in the compiler, and as a result the actual instructions used can be rather different between the workloads. Indeed, we even see some instructions that are uniquely used in just one workload. We have some examples of this in our current dataset. 

Windows 10 never uses LEAVE or ENTER instructions. As discussed in my previous blog post, the old Linux 2.6 Busybox* setup did use LEAVE rather extensively, but more recent Linux distributions did not use it. Since Windows 10 is a rather recent software stack, it makes sense that it no longer uses even the LEAVE instruction. 

There are some instructions that Windows 10 uses, but that none of the Linux setups did. The most significant one is the MOVNTI instruction from the Intel® Streaming SIMD Extensions 2 (SSE2) instruction set. It makes up more than three percent of the instructions on Windows! In addition, outside of the most common instructions shown above, Windows 10 uses several unique vector instructions that none of the Linux variants used: PADDW, PSRLW, PMOVZXBW, PUNPCKHQDQ, PSUBW, and PMADDWD. Given the richness of the vector instruction sets, this is not really that surprising. 

The CMC (complement carry flag) instruction from the original 8086 instruction set is also used by Windows but not Linux.  The same holds for BSR (bit scan reverse) from the 80386. They are not exactly common (measured at less than 0.01%), but still interesting to see that they are never used in the Linux boots.

Note that this is just about the operating system boot process; the picture is likely to be different for application software. Indeed, I did some other experiments with Linux that showed that rather starkly, as discussed below. 

Vector and SIMD

Vector instructions are not used all that much during the boot. The difference between the v1 and v6 processors is not particularly big for Windows – it was much more pronounced on Linux. However, it gets more interesting when Windows 10 is compared to Ubuntu 16:

Instrumentation Windows 10 Vector Instructions

Overall, Windows and Ubuntu use the same proportion of vector instructions during the boot (roughly 5%). These vector instructions are distributed rather differently, though. Windows uses more SSE2 instructions, while Ubuntu uses more MMX instructions. Windows also does not change the instructions used quite as much between generations, with a barely perceptible use of Intel® Advanced Vector Extensions (AVX) on the v6 processor. 

A Few More Things

This investigation of instruction mnemonics is really just a simple example of what you can observe in a software run using instrumentation in a virtual platform. It is rather informative, but there is a lot more that can be observed and counted. As a Simics user, you can program tools to collect pretty much anything you want to (as long as it is part of the virtual platform, of course). 
As an example, here is the distribution of instruction sizes during the Windows 10 boot on the v6 processor:

Instrumentation Windows 10 Instruction Sizes

This works out to an average size of about 3.73 bytes per executed instruction. Note that this does not really say anything about the size of the code. It is rather an indication of the pressure that the code puts on the cache system and processor decoders. Intel® Architecture (IA) is a classic variable-length instruction set, which is clearly seen here with instruction lengths varying all the way from 1 byte to 14 bytes. It is worth noting that really long instructions are also really rare. 
Another way to slice the instructions is to look at the operand types along with the instruction opcodes. This is a more fine-grained split than the mnemonics used above. For example, the 20 most common variants of the MOV instruction are the following:

Instrumentation Windows 10 Mov Address Modes

And this is far from all of them… there are many other particular addressing modes being used. It is classic long-tail distribution: the most common modes make up by far the majority of all MOV operations, while the more complex modes are used often enough to still matter. 

Note that we are seeing moves of all sizes here: just because this is a 64-bit Windows operating system running on a 64-bit processor does not mean that all operations are actually 64 bits in size. Byte (8-bit), word (16-bit), and double-word (32-bit) operations are also being used. 32-bit is as common as 64-bit.

Vector and SIMD at the Desktop in Linux

When discussing these measurements with one of my colleagues, the question came up about vector instructions in general and AVX instructions and how they very much depend on the workload being used. An operating-system boot is not likely to use them for more than a little crypto and possibly some highly-optimized memory copy operations. But he had seen some other behaviors when using a system interactively. Thus, one more experiment was made, where I took the v6 processor with Ubuntu, and started to run some interactive software after the boot. Essentially, opening a terminal and starting a new Firefox process.

Instrumentation Ubuntu 16 Desktop Vector Instructions

As can be seen from the diagram, the desktop activity makes extensive use of AVX instructions – including even the rather new AVX2 and FMA3 instructions. Vector instructions actually comprise more than 12% of all instructions executed – and remember that this includes all instructions in the whole machine, not just the user-level code or the code in the graphics subsystem. 

Conclusions

This was a second blog post with graphs and numbers detailing the different types of instructions being executed in a number of different workloads across different processor types. It is a fascinating set of data for a computer architecture nerd like me. However, the most interesting thing is how the numbers were collected – using Simics and its instrumentation capabilities. Simics can simulate pretty much any system, and allows for non-intrusive inspection and debugging. Collecting instruction statistics such as I did here offers useful insights for processor designers, software engineers, researchers, and students. 

A short plug here: Simics is available for free to universities, and it is a very versatile tool for subjects including computer architecture, operating systems, networking, embedded systems, simulation, virtual platforms, and low-level programming. This blog post could be seen as an example of an undergraduate computer architecture or assembly programming lab.

 


Top Ten Intel Software Developer Stories | November

New IoT Developer Kits

Announcing Arduino Create* and the UP Squared* Grove* IoT Development Kit

Use this powerful combination of newly introduced hardware and software to assist you in building high performance commercial IoT solutions.


Choose the Right Processor

Which Intel® Processor is Right for Your VR Projects?

Compare the performance of the Intel® Core™ i7 processor against the Intel® Core™ i5 processor to determine which one is better suited for applications that use virtual reality (VR).


Vector API Developer Program

Vector API Developer Program for Java* Software

Get an introduction to Vector API and learn how to build it in Java* programs using this tutorial.


Intel Game Developer Program

Join the Intel® Game Developer Program

Learn more about this program and how it can help you with your game development goals.


Modern Code Jump Game

Show your expertise with modern code and test your skills with this addictive new game.


UP Squared* Grove* IoT Development

UP Squared* Grove* IoT Development Kit

Find a rapid prototyping development platform that includes integrated software and end-to-end tools which will reduce development time for your intensive computing applications.


Immersive Gaming

Portland VR Meetup Recap: Immersive Gaming

October's meetup included a community of VR developers, entrepreneurs, engineers, artists, enthusiasts, and early adopters who discussed the future of immersive gaming using augmented reality (AR) and VR.


Proton Collisions

Track Reconstruction with Deep Learning

As part of the Modern Code Developer Challenge, Antonio Carta describes his work with the Compact Muon Solenoid, a general purpose detector used to detect particles generated by proton-proton collisions.


SMITE

We Will SMITE* Thee

Learn how the game SMITE elevated Hi-Rez* Studios to another level by allowing the business to engage directly with an enthusiastic audience.


Deep Mask Installation

DeepMask Installation and Annotation Format for Satellite Imagery Project

The process of training our computers to recognize different objects in given images is complex. Abu Bakr describes the process of transfer learning.


Intel® Developer Zone experts, Intel® Software Innovators, and Intel® Black Belt Software Developers contribute hundreds of helpful articles and blog posts every month. From code samples to how-to guides, we gather the most popular software developer stories in one place each month so you don’t miss a thing.   Miss last month?  Read it here. 

The Fab Five: Game Developer Content November

Adventure Pals

A Boy and His Giraffe

Read about the journey that started with a favorite illustration and turned into a quirky character game.


Adam Ardisasmita

Intel® Software Innovator Adam Ardisasmita: Teaching Kids through Educational Games

Inspired by a love of video games Software Innovator Adam Ardisamita is working to improve early childhood education using games and technology.


Pepper Grinder

Social Media Can Be an Indie’s Best Friend

Popularity on social media caused this game developer to concentrate on creating a drill- and digging-based game.


Cat Quest

Cat Quest: A Love Letter to Eastern and Western Role-Playing Games

After an initial foray into dancing cats, The Gentlebros* refined Cat Quest to be a different kind of role-playing game (RPG)—one where cats are the dominant species.


Unreal logo

How to Get Started in VR with Unreal* Engine

Video and step-by-step instructions on how to create a VR experience using Unreal* Engine.


Get ready.  Get noticed. Get big.  Get news you can use by joining the Intel® Software Game Developer Program.

The Best of Modern Code | November

Elena Orlova

Deep Learning for Fast Simulation

Meet student Elena Orlova whose project is teaching algorithms to be faster at simulating particle-collision events.


in situ visualization

SDVis and In-Situ Visualization on Texas Advanced Computing Center's (TACC) Stampede

As data sizes outgrow disk I/O capacity, visualization will be increasingly incorporated into the simulation code (in-situ visualization). Learn how recent work at TACC is addressing this need.


Konstantinos Kanellis

Cells in the Cloud: Scaling a Biological Simulator to the Cloud

Konstantinos Kanellis helps us to understand how distributed computing works—by breaking large scale complex simulation tasks into smaller parallel tasks.


Oceans of Data

Lab7 Systems* Helps Manage an Ocean of Information

With help from Intel® Parallel Studio XE, find out how Lab7Systems* is finding efficient ways to manage massive amounts of data making life easier for bioinformaticians, scientists, and IT teams.


Java

Intel Accelerates Hardware and Software Performance for Server-Side Java* Applications

Discover performance optimizations for Java* applications that run using the optimized Java Virtual Machine (JVM) and are powered by Intel Xeon® processors and Intel® Xeon Phi™ processors.

INTEL® GLOBAL IoT DEVFEST II: THE LEARNING LIVES ON

You Can Still Experience the IoT Skill-building Sessions,
Expert Presentations and Opportunities for Inspiration

If you missed the industry’s premier celebration of all things IoT – or, if you just want to relive its greatest moments – Intel invites you to view the talks online and enjoy the best of DevFest II.  This second virtual conference provided a global platform for IoT thought leaders, with continuous talks spanning two 16-hour days in early November.  

In addition to encore presentations of our many talks, the DevFest site also features a “welcome” video from Conference Chair Grace Metri, Internet of Things Community Evangelist at Intel, who introduces our four topic tracks and explains how to navigate to the IoT sessions you wish to view.

A big “thank you” to the more than 100 IoT professionals from 87 countries and 44 companies who participated – sharing their IoT journeys via keynotes, presentations and 1:1 mentoring.  DevFest II showcased cutting-edge research and real-world innovation as the developer community came together to advance the future of IoT applications.  

During the 32-hour continuous online training event, more than 100 IoT industry superstars delivered training and deep-dive presentations - followed by live Q&A sessions.  

A sampling of the key presenters included:
* Shai Monson, Manufacturing Domain Lead, Intel – “Intel Factory IoT Journey”
* David Formisano, Director of IoT Strategy, Intel – “Accelerating Solutions Deployment in the Rapidly Evolving World of IoT”
* Massimo Banzi, Co-Founder, Arduino – “Arduino Create, the simple path to Industrial IoT Development”
* Maciej Kranz, VP Strategic Innovation, Cisco – “Proven Roadmap to Develop a Successful IoT Journey”
* John Walicki, Watson Ecosystem, IBM – “IoT End to End: Turn your IoT Sensor Data into Insights”
* Lothar Schubert, Dir. Dev Relations and IoT, GE – “Industrial IoT: Building the Developer Ecosystem”
* Alex Wilson, Market Dev at Wind River – “Building Functional Safety Products with Wind River VxWorks RTOS”
* Faith McCreary, Ph.D, Principal Engineer UX, Intel – “UX Strategies for the New Ordinary: Designing for Privacy in the Age of Magic”
* Matthew Bailey, Global Ambassador, OpenFog – “Fog Computing is an imperative ICT technology for Smart Cities”
* Fabrizio Del Maffeo, Managing Director, AAEON – “From Getting Connected to Field Employment”
* Nick O’Leary, Co-Creator of Node-Red, IBM – “Wiring the Internet of Things with Node-RED”
* Jennifer Williams, Architecture/Development, Intel – “Enabling a Densely-Scalable Low-Power WSNs for Shipping and Industrial IoT”
* Rakesh Dodeja, Principal Engineer, Intel – “Containers to Deploy IoT Micro Services at the Fog and Edge Nodes”
* Christopher Kalkhof, IoT Business Development, Infosim – “Manage the World of IoT Gateways”
* Guy Vinograd, CEO, Softimize – “IoT = Device + Cloud. Best practices for End to End IoT Architecture” 
* Alok Batra, Chief Product Officer, Atomiton Inc. – “Data and Analytics Strategy in IIoT”


Thought leaders who presented at DevFest II shared their IoT expertise in four topic tracks:
* Developing IoT Solutions for a Connected, Smart, and Autonomous World introduced the three phases of IoT development, as well as an array of Intel IoT developer tools, SDKs, technologies and other resources. 
* Architecting, Integrating and Managing IoT Solutions examined how to enable the full potential of IoT by overcoming security and privacy challenges to make IoT a force for business transformation.
* Data Analytics and Artificial Intelligence looked at how ever-greater amounts of data improves the learning environment and expands the possibilities of edge and cloud analytics.
* Uncovering Real Business Opportunities from the Evolution of IoT highlighted real-world use cases and disruptive new business models, as the world’s most forward-thinking companies find IoT applications within their operations.

A Showcase for the Latest IoT Developer Tools and Products

In addition to training and mentoring opportunities, DevFest provided a forum for leading technology companies to showcase their IoT developer resources, including:
* Arduino* Create, an integrated online platform that enables makers and professional developers to write code, access content, configure boards, and share projects
* UP Squared* Grove IoT Development Kit with simple setup and configuration, pre-installed Ubuntu OS, and expanded I/O for rapid prototyping
* Intel® System Studio 2018 beta, a comprehensive cross-platform tool suite to help you move from prototype to product faster, with optimizing compilers, highly tuned libraries, analyzers, debug tools, custom workflows and code samples. 
* Intel® Secure Device Onboard service vastly accelerates trusted onboarding of IoT devices—from minutes to seconds—with a zero-touch, automated process that begins when the device is powered on and ends when the IoT platform takes control.

A Life Beyond the Event

Intel again thanks all speakers, companies and attendees who helped make Intel Global IoT DevFest a success!   We hope you gathered useful insights into the world of IoT today – and what we can make it in the future.

Was there a presentation you couldn’t attend or a speaker you’d love to hear again?  Catch up on all DevFest highlights anytime you want:  

* Check out the on-demand videos from this event, including the 'welcome' video from Conference Chair, Grace Metri, Internet of Things Community Evangelist at Intel. 
* In case you missed the inaugural event, view encore presentations from the first virtual online conference in June
* Sign up for the Intel® Software Developer Zone Newsletter and stay up-to-date on the latest IoT tools and trends

 

Functional Connectivity of Epileptic Brains: Investigating Connectivity of Epileptic Brain - Week 1 Update

This blog post introduces the application of connectivity analysis to epileptic brains. We will show the fundamental steps for processing EEG data, including pre-processing, applying the necessary filters, and performing a basic connectivity extraction from the EEG data. These steps play a crucial part in the feature extraction process, where a Multi-Layer Perceptron neural network will be used as our main machine learning classification algorithm.

Definition of Epilepsy

Epilepsy is a brain disorder characterized by the recurrent, unprovoked interruption of brain function, called epileptic seizures. During epileptic seizures, groups of neurons in the cerebral cortex are excessively triggered simultaneously, resulting in symptoms such as muscle stiffness, muscle spasms, and impaired consciousness. The disease is considered a chronic disorder affecting 0.5–1% of the entire population. In the United States alone, 1.8% of adults (18 years and older) and 1% of children (aged 0–17) are reported to have epilepsy by the Centers for Disease Control and Prevention. Although the symptoms mentioned above are general symptoms, the location, duration, and propagation of the seizures vary depending on the individual. Due to the unpredictable occurrence of seizures, the quality of life of epileptic patients can be greatly impacted by this uncertainty. The definition of epilepsy alone makes me aware that a majority of people with epilepsy are suffering from the disorder. I believe that with current technology, we can push the boundary of knowledge and contribute to the progress of enhancing therapeutic protocols for all of these patients.

Why I Chose Brains and Epilepsy

I started my research on epileptic brains immediately after I joined the Center for Advanced Technology and Education (CATE) at Florida International University. My research focuses on exploring the connectivity domain of brains, which in this case means epileptic brains. The main reason I chose to work with the brain is its complexity: in my opinion, the human brain is the most fascinating and most complicated system that we have ever encountered. Therefore, trying to understand how the brain works will always be one of my top priorities in my research career.

The brain can be considered as the command center of our body. Not only does it communicate throughout our entire body by using the nervous system as a main channel, it also communicates within itself between different regions as well. Groups of neurons inside our brain communicate with other groups by the medium of axons and we can capture these activities by using electroencephalogram or EEG.

Why I Chose to use EEG?

Among the neurophysiological techniques, the electroencephalogram (EEG) remains the most prevalent and reliable modality for examining brain activity, and it serves as the main diagnostic assessment. EEG recording is simple and inexpensive compared to other neuroimaging studies. EEG captures the electrical activity produced by the neurons in the brain, and due to its high temporal resolution, it is considered a suitable tool for identifying synchronization between a pair of signals. A substantial amount of epilepsy diagnosis is done by recording and visually inspecting EEG, and extracting the epileptic characteristics of the EEG plays a key role in disease detection. Consequently, extracting the hidden patterns of EEG may be a beneficial tool to alleviate the complex process of epilepsy diagnosis.

Image source: http://www.medicalestudy.com

Introducing Brain Connectivity

The study of functional connectivity has received great attention in the field of neuroscience and has yielded promising results in diverse research endeavors. Functional connectivity is defined as the study of the correlation of events occurring in regions of the cortex. The value of functional connectivity depends on the level of synchronization between groups of neurons, which can be estimated from EEG recordings using one of the most promising measures: coherence. EEG coherence captures the interactions between neural activities at different frequencies across brain regions. In this case, it will be used to analyze the pattern of interictal epileptiform discharges (IEDs), on the premise that the characteristics of different types of IEDs will be distinct from each other, creating a pattern that can be used as classification parameters.
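
As a small illustration of the measure itself, the sketch below computes magnitude-squared coherence between two channels with SciPy and averages it over a band of interest. The sampling rate, window length, and frequency band are assumptions for the example, and the synthetic signals simply stand in for real EEG channels.

    import numpy as np
    from scipy.signal import coherence

    def band_coherence(x, y, fs=256.0, band=(8.0, 13.0)):
        # Magnitude-squared coherence estimated over 2-second windows
        f, cxy = coherence(x, y, fs=fs, nperseg=int(2 * fs))
        mask = (f >= band[0]) & (f <= band[1])
        return cxy[mask].mean()

    # Synthetic stand-ins for two EEG channels sharing a 10 Hz rhythm
    rng = np.random.default_rng(0)
    t = np.arange(0, 10, 1 / 256.0)
    shared = np.sin(2 * np.pi * 10 * t)
    ch1 = shared + 0.5 * rng.standard_normal(t.size)
    ch2 = shared + 0.5 * rng.standard_normal(t.size)
    print('Alpha-band coherence:', band_coherence(ch1, ch2))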

Next Steps

Now that we have a glimpse of brain connectivity and how to apply it to EEG, the next step is to continue our work by importing EEG data into our working environment and starting the data investigation process.

Continue to: Preprocessing EEG Data - Week 2 Update
