
Updated—Nvidia Wants To Be The Brains Behind The Surveillance State (NVDA)

Update below.
Original post:

The company just rolled out a $399,000 two-petaflop supercomputer that every petty totalitarian and his brother is going to lust after to run their surveillance-city smart-city data-slurping dreams.
The coming municipal data centers will end up matching the NSA in total storage capacity, and NVIDIA wants to be the one sifting through it all. More on this down the road; for now, here's the beast.
From Hot Hardware:

NVIDIA Unveils Beastly 2 Petaflop DGX-2 AI Supercomputer With 32GB Tesla V100 And NVSwitch Tech (Updated)
Of the over 28,000 attendees at NVIDIA’s GTC 2018 GPU Technology Conference, many converged on the San Jose Convention Center this week to learn about advancements in AI and Machine Learning that the company would bring to the table for developers, researchers and service providers in the field. Today, NVIDIA CEO Jensen Huang took to the stage to unveil a number of GPU-powered innovations for Machine Learning, including a new AI supercomputer and an updated version of the company’s powerful Tesla V100 GPU that now sports a hefty 32 Gigabytes of on-board HBM2 memory.

A follow-on to last year’s DGX-1 AI supercomputer, the new NVIDIA DGX-2 can be equipped with double the number of Tesla V100 32GB processing modules for double the GPU horsepower and a whopping 4 times the amount of memory space, for processing datasets of dramatically larger batch sizes. Again, each Tesla V100 now sports 32GB of HBM2, where the previous generation Tesla V100 was limited to 16GB. The additional memory can afford factors-of-multiple improvements in throughput due to the data being stored in local memory on the GPU complex, versus having to fetch out of much higher latency system memory, as the GPU crunches data iteratively. In addition, NVIDIA also attacked the problem of scalability for its DGX server product by developing a new switch fabric for the DGX-2 platform.....MORE
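The scaling claims in that excerpt check out with back-of-the-envelope arithmetic. A minimal sketch, assuming the DGX-1 baseline of eight 16 GB Tesla V100s (its launch configuration, not stated in the excerpt):

```python
# Rough arithmetic behind the "double the GPUs, 4x the memory" claim.
# Assumption: the original DGX-1 shipped with 8 Tesla V100s at 16 GB each;
# the DGX-2 doubles the GPU count to 16 and doubles per-GPU HBM2 to 32 GB.
dgx1_gpus, dgx1_mem_gb = 8, 16
dgx2_gpus, dgx2_mem_gb = 16, 32

dgx1_total = dgx1_gpus * dgx1_mem_gb   # aggregate HBM2 on DGX-1, in GB
dgx2_total = dgx2_gpus * dgx2_mem_gb   # aggregate HBM2 on DGX-2, in GB

print(dgx2_gpus / dgx1_gpus)    # double the GPU horsepower
print(dgx2_total / dgx1_total)  # 4x the memory space
```

Doubling both the GPU count and the per-GPU memory is what multiplies to the quoted 4x aggregate capacity.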
The data sifting is so fast that data storage companies are starting to supercharge their systems with GPUs using the older DGX-1.
From TechTarget's SearchStorage:

Pure Storage AIRI is AI-ready infrastructure that integrates Pure's all-flash FlashBlade NAND storage blades and four Nvidia DGX-1 artificial intelligence supercomputers.
Pure Storage is elbowing into AI-based storage with FlashBlade, a use case that's a natural progression for the scale-out unstructured array.

The all-flash pioneer this week teamed with high-performance GPU specialist Nvidia to unveil Pure Storage AIRI, a preconfigured stack developed to accelerate data-intensive analytics at scale.

AIRI stands for AI-ready infrastructure. The product integrates a single 15-blade Pure Storage FlashBlade array fed by four Nvidia DGX-1 deep learning supercomputers. Connectivity comes from two remote direct memory access 100 Gigabit Ethernet switches from Arista Networks.
In this product iteration, Pure uses fifteen midrange 17 TB FlashBlade NAND blades. Pure Storage claims a half rack of AIRI compute and storage is equivalent to about 50 standard data center racks....MORE
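For scale, the raw capacity of the configuration described above works out as follows (raw NAND only; usable capacity after FlashBlade's data reduction and protection overhead would differ and isn't given in the excerpt):

```python
# Raw NAND capacity of the single 15-blade AIRI FlashBlade chassis,
# using the figures quoted above (15 blades at 17 TB each).
blades = 15
tb_per_blade = 17
raw_tb = blades * tb_per_blade
print(raw_tb)  # raw terabytes in the half-rack configuration
```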
Finally, from Tiernan Ray at Barron's Tech Trader:

Nvidia: One Analyst Thinks It’s Decimating Rivals in A.I. Chips
Nvidia's CEO Jen-Hsun Huang is taking away the oxygen from competitors in A.I., Rosenblatt analyst Hans Mosesmann tells Barron's, by a combination of chip performance that's hard to match and software technology that others can't offer.
The fastest-growing component of chip maker Nvidia’s (NVDA) business is its “data center” chip product line, driven in part by sales of graphics chips — “GPUs” — that are widely used for artificial intelligence tasks such as machine learning.

That division looks to have a very bright future, according to one analyst who attended Nvidia’s annual “GTC” conference last week.

“What Nvidia did with their announcements last week was to drive everyone, including Intel (INTC), but also startups, to re-examine their roadmaps,” says Hans Mosesmann of Rosenblatt Securities.

I chatted with Mosesmann by phone on Friday. Mosesmann, who has a Buy rating on Nvidia stock and a $300 price target, foresees the company having something of a lock on the A.I. chip market.
"Nvidia has reset the level of performance,” he told me.
Nvidia’s data center business totaled $606 million in revenue, or 21% of its total, and more than doubled from a year earlier. (For more details on Nvidia’s revenue trends, see the company’s presentation on its investor relations Web site.)

Nvidia, in Mosesmann's thinking, keeps upping the ante. Not only turning up the performance of chips, but also redefining the battle by making it about software, and about system-level expertise in A.I., not just about the chip itself:
[Nvidia CEO] Jen-Hsun [Huang] is very clever in that he sets the level of performance that is nigh impossible for people to keep up with. It’s classic Nvidia — they go to the limits of what they can possibly do in terms of process and systems that integrate memory and clever switch technology and software, and they go at a pace that makes it impossible at this stage of the game for anyone to compete.

Everyone has to ask, Where do I need to be in process technology and in performance to be competitive with Nvidia in 2019. And do I have a follow-on product in 2020? That’s tough enough. Add to that the problem of compatibility you will have to have with 10 to 20 frameworks [for machine learning.] The only reason Nvidia has such an advantage is that they made the investment in CUDA [Nvidia’s software tools].

A lot of the announcements at GTC were not about silicon, they were about a platform. It was about things such as taking memory [chips] and putting it on top of Volta [Nvidia’s processor], and adding to that a switch function. They are taking the game to a higher level, and likely hurting some of the system-level guys. Jen-Hsun is making it a bigger game.
An immediate result, Mosesmann believes, is that a lot of A.I. chip startups, companies that include Graphcore and Cerebras, are going to have a very hard time keeping up.
“He’s destroying these companies,” says Mosesmann of the young A.I. hopefuls. “These private companies have to go back and get another $50 million [of funding]."

“He's taking all the oxygen out of the room,” says Mosesmann.
For the established competitors such as Intel, Mosesmann sees plenty of attempts at A.I. all of a sudden rendered moot.

Intel bought A.I. chip startup Nervana Systems in 2016 for $400 million. I’ve written a bunch about how Nervana is becoming Intel’s A.I. focus....MUCH MORE
Update: "'Nvidia's Slightly Terrifying Metropolis Platform Paves the Way for Smarter Cities' (NVDA)"
