Jason Mars built his own Siri and gave it away. A professor of computer science at the University of Michigan, he and several researchers at the school recently developed a digital assistant called Sirius that responds immediately to voice commands, much like Siri on the iPhone. Mars then open-sourced Sirius, sharing its code freely with the world.
Through Sirius, software engineers can explore the complexities of modern speech recognition, and even build the feature into their own mobile applications. In Mars's eyes, that alone is a wonderful thing.
But the project has another purpose. Mars realized that the massive computing centers supporting today's Internet are not ready for the coming wave of voice services, and he hopes Sirius can show how they need to change. "We want to understand how to build the data center of the future," he said.
Digital assistants such as Siri, Google Now, and Microsoft's Cortana do not just run on a phone. They run on thousands of machines inside computing centers, and as more and more people around the world use these services, ordinary servers can no longer keep up: they take up too much space and consume too much energy. Far more efficient hardware is needed to do the job.
Through the open-source Sirius project, Mars and his colleagues (including a Michigan PhD student named Yunqi Zhang) can show how tools like Siri behave inside a data center, and ultimately identify the hardware best suited to running voice services. The same hardware could also serve the other artificial-intelligence tools reshaping the Internet, such as face recognition and driverless cars.
Putting Google search to shame
In testing Sirius, Mars showed that running it on traditional hardware requires 168 times the equipment, space, and energy of a text-based search engine like Google's. Considering that speech recognition is the future not just of phones but of wearable devices, that simply is not practical. "We're going to hit a wall," Mars said. Data centers not only occupy space; they cost huge amounts of money to build and consume huge amounts of energy.
The question is: what hardware should replace the traditional machines?
The answer matters not only to Apple, Google, Microsoft, and countless application developers; it also affects the companies that sell data-center hardware, as well as leading chip makers such as Intel and AMD. "It means a lot for us in the future," said Mark Papermaster, AMD's chief technology officer.
That is why Mars built Sirius. Apple, Google, and Microsoft know how to run these new services, but the rest of the world does not, and it needs to.
A parallel universe
Most Internet services, from Google's Web search to the Facebook social network, run on server chips from Intel and AMD (mostly Intel). The problem is that these CPUs are not well suited to running speech-recognition services like Siri, because such services require very large numbers of small computations to be carried out at the same time.
As companies such as Google, Microsoft, and Baidu have found, these calculations are better handled by GPUs, chips originally designed for complex digital image processing, and by FPGAs, chips that can be programmed for specific tasks. Google already uses GPUs to drive the brain-like "neural networks" behind services such as Google Now, and part of Microsoft's Bing search engine is driven by FPGAs.
Bing does not handle voice, but GPUs and FPGAs can boost the efficiency of any online service that needs to move fast, largely because they do not burn as much energy or take up as much space.
Basically, with GPUs and FPGAs, you can pack more chips onto a single machine. An individual GPU or FPGA is not as powerful as a CPU, but a large computing task can be broken into smaller pieces and spread across many of these chips. That is especially attractive for applications like speech recognition, which were born for parallel computing. "A lot of new services must very quickly filter vast amounts of information," Papermaster said. "Because those tasks are repetitive, they can be accelerated by GPUs and FPGAs."
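To make the idea concrete, here is a minimal sketch (plain Python, not part of the Sirius code) of splitting a large, repetitive job into independent chunks that run in parallel; the audio "frames", "templates", and the dot-product scoring are invented for illustration and stand in for the kind of small, repeated computation that maps well onto GPU- or FPGA-style hardware.

```python
# A minimal data-parallel sketch: score many audio frames against a set
# of acoustic templates by splitting the frames into chunks and handing
# each chunk to a separate worker. The data and scoring rule are made
# up for illustration; a real speech recognizer is far more involved.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def score_chunk(chunk, templates):
    """Score one chunk of frames: the same small computation repeated
    over and over, which is exactly what parallel chips are good at."""
    return chunk @ templates.T  # (frames_in_chunk, num_templates)

def score_all(frames, templates, num_workers=4):
    """Split the big job into chunks and score them in parallel."""
    chunks = np.array_split(frames, num_workers)
    with ProcessPoolExecutor(max_workers=num_workers) as pool:
        results = pool.map(score_chunk, chunks, [templates] * num_workers)
    return np.vstack(list(results))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = rng.standard_normal((10_000, 40))   # 10k audio frames, 40 features each
    templates = rng.standard_normal((64, 40))    # 64 acoustic templates
    scores = score_all(frames, templates)
    print(scores.shape)  # (10000, 64): one score per frame per template
```

Because each chunk is independent, the work scales with the number of simple processors available rather than with the speed of any single one, which is the trade-off the article describes.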
Today the GPU is becoming the natural choice not only for speech recognition but for any service built on neural networks. These "deep learning" tools already drive services such as face recognition and targeted advertising, and eventually they will help drive autonomous vehicles and robots. According to Jeff Dean, who oversees much of Google's deep-learning work, Google now uses a mix of GPUs and CPUs to run roughly 50 kinds of neural networks behind its Web services.
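A toy example, again in Python, hints at why neural networks fit GPUs so well: a forward pass is little more than repeated matrix multiplications, and every input in a batch can be processed independently. The layer sizes and random weights below are arbitrary and are not how any production network is defined.

```python
# Toy fully connected network: each layer is one big matrix multiply
# plus a bias, applied to a whole batch of independent inputs at once.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def forward(x, weights):
    """Run a batch of inputs through a small fully connected network."""
    h = x
    for w, b in weights:
        h = relu(h @ w + b)  # one dense layer: matrix multiply, bias, nonlinearity
    return h

rng = np.random.default_rng(0)
layer_sizes = [128, 256, 256, 10]  # input -> two hidden layers -> output
weights = [
    (rng.standard_normal((m, n)) * 0.01, np.zeros(n))
    for m, n in zip(layer_sizes[:-1], layer_sizes[1:])
]
batch = rng.standard_normal((32, 128))  # 32 independent inputs
print(forward(batch, weights).shape)    # (32, 10)
```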
But Microsoft has shown that FPGAs are another option. Through the open-source digital assistant Sirius, Jason Mars is searching for the data-center architecture best suited to the Internet services of the future.
Not limited to Apple and Google
Right now the answer is still unclear, but Mars and Sirius have at least shown that GPUs and FPGAs are better choices than CPUs. "Future data-center designs must include GPUs or FPGAs," Mars said. "That could bring at least an order-of-magnitude improvement."
FPGAs, he said, can be programmed to do just about anything, and they are more efficient than GPUs (in the University of Michigan's tests, FPGAs delivered about 16 times the performance of a CPU, while GPUs delivered about 10 times). But they require a great deal of design work: companies like Google, Apple, and Microsoft would have to hire engineers just to program them.
GPUs also take a bit of extra work: software must be customized to fit these chips, but engineers do not have to program the chips themselves. For that reason, GPUs are the more practical option, especially since speech-recognition tools will no longer be limited to Apple, Google, and Microsoft, and will spread to companies reluctant to hire chip engineers.
"Siri, Cortana, and Google Now, along with real-time data analysis and advanced video applications, are where the industry is heading," Mars said.
However this plays out, it will reshape the computer-chip business. Intel has been exploring FPGAs. GPU maker NVIDIA is riding the deep-learning wave to new heights. AMD, which bought GPU maker ATI years ago, is also pressing ahead in this field; as Papermaster put it, AMD is working with companies across the industry to build tools that make it easier for programmers to write software for GPUs.
With companies such as Facebook and Microsoft also exploring low-power ARM chips for their Internet data centers, the chip market is bound to change significantly over the next few years. Jason Mars and his Sirius project aim to show what that future holds, and Sirius itself may well help drive the change. After all, anyone running their own Siri will need their own chips.
via Wired
https://www.youtube.com/watch?v=y__kWv23Hlo