Former Uber site reliability engineer Susan Fowler accused the company of rampant sexual harassment and human resources negligence in a blog post published today.

It's the latest in a series of events that point to serious questions about Uber's company culture.

Fowler claims that on her first day out of training, she was solicited for sex by a superior on an internal company chat thread. She then immediately captured screenshots of the messages and sent them to Uber's human resources department. In a healthy organization, such a problem would have been resolved quickly. But Fowler alleges that the harassment only continued, preventing her from moving up within the company.

"Upper management told me that he was a high performer and they wouldn't feel comfortable punishing him for what was probably just an innocent mistake on his part," explained Fowler in her post.

At this point, Fowler says in her post, she was given a choice: remain on the team and accept a poor performance review, or move to a different team.

"I was then told that I had to make a choice: (i) I could either go and find another team and then never have to interact with this man again, or (ii) I could stay on the team, but I would have to understand that he would most likely give me a poor performance review when review time came around, and there was nothing they could do about that," Fowler further explained.

Though she didn't want to leave the role she felt best prepared to fill, she switched teams. Work continued, and as Fowler settled into the new role, she regularly had conversations with female employees who shared similar stories of HR negligence, some citing unacceptable experiences with the same superior who had solicited her. Along with a number of her colleagues, Fowler met once again with HR to make the point that the harassment was epidemic. Fowler says Uber insisted that the manager had only ever been accused of a single offense.

Amid chaotic internal politics, Fowler attempted to transfer to a different department, but the company blocked her request. Given her strong performance reviews, she couldn't understand why the request had been denied.

"I was told that performance problems aren't always something that has to do with work, but sometimes can be about things outside of work or your personal life," added Fowler in her post.

She ultimately decided to stay in the same role until her next performance review. But the frustration continued: a second transfer request was rejected, and she was further told that her review had been changed after the fact and that she didn't show signs of an upward career trajectory. As a result, she was shut out of a company-sponsored Stanford computer science graduate program for high achievers.

Aside from these claims, Fowler also describes in her post a culture of pervasive sexism, telling the story of an employee who refused to order jackets in women's sizing because they cost more. No matter how many complaints she brought forth, HR insinuated that she was the common denominator in all of them. Fowler says she was threatened and intimidated in an effort to stop her from reporting transgressions to HR.

In response to Fowler's post, Uber CEO Travis Kalanick promised to investigate the claims. In a statement to Axios, Kalanick drew a sharp line between the alleged behavior and what he believes is core to the company's culture:

"I have just read Susan Fowler's blog. What she describes is abhorrent and against everything Uber stands for and believes in. It's the first time this has come to my attention so I have instructed Liane Hornsey, our new Chief Human Resources Officer, to conduct an urgent investigation into these allegations. We seek to make Uber a just workplace and there can be absolutely no place for this kind of behavior at Uber and anyone who behaves this way or thinks this is OK will be fired."

Uber board member and media mogul Arianna Huffington said in a tweet that she would conduct an independent investigation into the matter. Huffington even released her email address in an effort to make it easier for those with information to come forward.

Sexual harassment is rampant in Silicon Valley, and the worst part is that most of it goes undocumented. If Fowler's account is accurate, Uber's efforts to thwart her reports of repeated harassment paint a horrifying picture of the company's internal culture.

Uber is no stranger to the negative spotlight when it comes to company culture, not just in interpersonal relationships but in its broader business model and how it operates in the competitive market for transportation services. In 2014, one of its senior executives (who is still at the company) told a room full of journalists that Uber runs opposition research on its critics. One of the critics singled out had been very outspoken (along with many others) about how Uber does not take passenger safety seriously enough.

Uber has, in fact, been the subject of specific incidents involving passenger safety, and, on a wider competitive level, it has been accused of improper practices in specific markets and occasionally banned from them. Other accusations involve privacy violations over access to customer data (some of which have since been settled, some of which still crop up today).

We still don't know the number of female engineers at Uber because the company hasn't been transparent about its hiring; Jesse Jackson has made it a priority to change this. But even if Kalanick weren't complicit, Fowler's experience could speak to how Uber weighs employee performance against ethics and decency.

We have reached out to Uber and CEO Travis Kalanick and will update this post when we hear back.

Source article via https://techcrunch.com

Google is releasing a new TensorFlow object detection API to make it easier for developers and researchers to identify objects within images. Google is trying to offer the best of both simplicity and performance: the models being released today have performed well in benchmarking and have become regularly used in research.

The handful of models included in the detection API ranges from heavy-duty Inception-based convolutional neural networks to streamlined models designed to operate on less sophisticated machines; a MobileNets single-shot detector, for example, comes optimized to run in real time on a smartphone.

Earlier this week Google announced its MobileNets family of lightweight computer vision models. These models can handle tasks like object detection, facial recognition and landmark recognition.

Today's smartphones don't possess the computational resources of larger-scale desktop and server-based setups, leaving developers with two options. Machine learning models can run in the cloud, but that adds latency and requires an internet connection, both non-starters for a lot of common use cases. The alternative approach is simplifying the models themselves, accepting a trade-off in accuracy in the interest of more ubiquitous deployment.

Google, Facebook and Apple have been pouring resources into these mobile models. Last fall, Facebook announced its Caffe2Go framework for building models that run on smartphones; the first big implementation was Facebook's style transfer. This spring at I/O, Google announced TensorFlow Lite, its streamlined version of the machine learning framework. And most recently at WWDC, Apple pushed out Core ML, its attempt to reduce the difficulty of running machine learning models on iOS devices.

Of course, Google's public cloud offerings give it differentiated positioning with respect to both Facebook and Apple, and it's not new to delivering computer vision services at scale via its Cloud Vision API.

Today's TensorFlow object detection API can be found here. Google wants to make it extra easy to play with and implement, so the entire kit comes prepackaged with weights and a Jupyter notebook.
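
For anyone who wants to try it, the basic workflow is to load one of the released pretrained models and run it on an image. The sketch below assumes the TF1-era frozen-graph packaging and the tensor names used in the API's sample notebook; the model filename and exact names are assumptions and may differ by release.

```python
# Minimal sketch: run a released detection model on one image.
# MODEL_PATH is a hypothetical local path to a downloaded model;
# tensor names follow the API's sample notebook and may vary.
import numpy as np
import tensorflow as tf
from PIL import Image

MODEL_PATH = "ssd_mobilenet_v1_coco/frozen_inference_graph.pb"

# Load the frozen detection graph shipped with the model.
graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(MODEL_PATH, "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

# Feed a single image (batch of one) and fetch the detection outputs.
image = np.expand_dims(np.array(Image.open("test.jpg")), axis=0)
with tf.Session(graph=graph) as sess:
    boxes, scores, classes, num = sess.run(
        ["detection_boxes:0", "detection_scores:0",
         "detection_classes:0", "num_detections:0"],
        feed_dict={"image_tensor:0": image})

# Print detections above a simple confidence threshold.
for box, score, cls in zip(boxes[0], scores[0], classes[0]):
    if score > 0.5:
        # box is [ymin, xmin, ymax, xmax] in normalized coordinates
        print(int(cls), round(float(score), 2), box)
```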

Source article via https://techcrunch.com

Vision Mercedes-Maybach 6 concept is nearly 6 meters long.
Image: Mercedes-Benz

People seem to have forgotten that Mercedes resurrected the Maybach brand as an all-new suffix to Mercedes’ most luxurious models, like the Mercedes-Maybach S600. To remedy that, it’s created a new luxury car of monumental proportions: the Vision Mercedes-Maybach 6 concept.

You might not be able to tell from the renderings, but it is truly monumental. To emphasize that point, let me give you a few of its specs straight away. First off, it's 18 feet, 8 inches long (that's almost 6 meters, hence the 6 in its moniker).

Despite its gargantuan size, its all-electric, all-wheel-drive powertrain will push it from 0 to 62 mph in less than four seconds on the way to its electronically limited top speed of 155 mph. It can accelerate that quickly thanks to a powertrain that puts out 738 horsepower, fed by an 80-kilowatt-hour battery pack.

Image: Mercedes-Benz

Although it's big and powerful, it's still relatively efficient: going by U.S. Environmental Protection Agency standards, it's rated at a 200-mile range on a single charge. What's more, thanks to fast-charging tech, it can take on enough charge for 62 miles of driving in just five minutes. The Vision Mercedes-Maybach 6 includes both wireless inductive charging and wired charging.
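
As a rough back-of-envelope check (assuming the quoted 200-mile range corresponds to the full 80-kilowatt-hour pack), 62 miles is about 31 percent of that range, or roughly 25 kilowatt-hours, and taking on that much energy in five minutes implies a charging rate on the order of 300 kilowatts.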

To access its “technoid” (Mercedes’ word, not mine) interior, occupants flip open the gullwing doors and drop into the coupe’s luxurious and techie cabin.

There, they'll find a blend of old-world luxury, like open-pore elm wood floors and soft, quilted leather. These classic features are juxtaposed with high-tech highlights like a windshield that doubles as both a window and a transparent digital display. The Vision Mercedes-Maybach 6 projects driving data and navigation information onto the windshield, and it's operated with gesture controls.

The tech isn't just functional; designers added some fashion, too. The transparent center tunnel between the driver and passenger projects a visual representation of the flow of electricity from the powertrain to the wheels. The harder the driver pushes the accelerator pedal, the more energy is shown flowing. Granted, it's just a digital approximation. But it's still a cool idea.

Of course, such a high-tech car is also autonomous. With a push of a button, the driver can delegate driving duties to the car. After all, there's nothing more luxurious than having a computer do the driving for you.

Image: Mercedes-Benz

Intriguingly, the new Vision Mercedes-Maybach 6 has the same wheelbase as the Vision Tokyo concept and wheels inspired by the brand’s Concept IAA. This is no coincidence. These three vehicles likely show a realistic vision of Mercedes’ future vehicle plans, from a self-driving lounge to an efficient, morphing luxury sedan to an extremely luxurious sports EV coupe.

With the Vision Mercedes-Maybach 6 concept, Mercedes proves that although the future of mobility will be governed by automated driving and extremely efficient designs, it doesn't have to be boring.

Source article via http://mashable.com/

On Saturday morning, the white stone buildings on UC Berkeley's campus radiated with unfiltered sunshine. The sky was blue, the campanile was chiming. But instead of enjoying the beautiful day, 200 adults had willingly sardined themselves into a fluorescent-lit room in the bowels of Doe Library to rescue federal climate data.

Like similar groups across the country—in more than 20 cities—they believe that the Trump administration might want to disappear this data down a memory hole. So these hackers, scientists, and students are collecting it and saving it on servers outside the government.

But now they're going even further. Groups like DataRefuge and the Environmental Data and Governance Initiative, which organized the Berkeley hackathon to collect data from NASA's earth sciences programs and the Department of Energy, are doing more than archiving. Diehard coders are building robust systems to monitor ongoing changes to government websites. And they're keeping track of what's been removed, to learn exactly when the pruning began.

Tag It, Bag It

The data collection is methodical, mostly. About half the group immediately sets web crawlers on easily copied government pages, sending their text to the Internet Archive, a digital library made up of hundreds of billions of snapshots of webpages. They tag more data-intensive projects (pages with lots of links, databases, and interactive graphics) for the other group. Called baggers, these coders write custom scripts to scrape complicated data sets from the sprawling, patched-together federal websites.
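
For the crawler half of that split, the simplest pattern is just to ask the Internet Archive to snapshot each tagged page. A minimal sketch, assuming the Archive's public Save Page Now endpoint and a hypothetical list of target URLs:

```python
# Sketch of the "crawler" half: ask the Internet Archive to snapshot pages.
# The target URLs here are placeholders for whatever pages a volunteer tags.
import time
import requests

PAGES = [
    "https://www.nasa.gov/topics/earth/index.html",  # hypothetical example
]

for url in PAGES:
    # Requesting web.archive.org/save/<url> triggers a fresh crawl of that page.
    resp = requests.get("https://web.archive.org/save/" + url, timeout=60)
    print(url, resp.status_code)
    time.sleep(5)  # be polite; the endpoint throttles aggressive callers
```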

It's not easy. "All these systems were written piecemeal over the course of 30 years. There's no coherent philosophy to providing data on these websites," says Daniel Roesler, chief technology officer at UtilityAPI and one of the volunteer guides for the Berkeley bagger group.

One coder who goes by Tek ran into a wall trying to download multi-satellite precipitation data from NASA's Goddard Space Flight Center. Starting in August, access to Goddard Earth Science Data required a login. But with a bit of totally legal digging around the site (DataRefuge prohibits outright hacking), Tek found a buried link to the old FTP server. He clicked and started downloading. By the end of the day he had data for all of 2016 and some of 2015. It would take at least another 24 hours to finish.
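
The bagger side often looks more like Tek's FTP pull than a web crawl. Here is a rough sketch of that kind of bulk download, with a placeholder host and directory standing in for the real server paths:

```python
# Sketch of a bulk FTP pull in the spirit of Tek's download.
# HOST and REMOTE_DIR are hypothetical placeholders, not NASA's real paths.
import ftplib
import os

HOST = "ftp.example.gov"
REMOTE_DIR = "/data/precipitation/2016"
LOCAL_DIR = "downloads"

os.makedirs(LOCAL_DIR, exist_ok=True)
ftp = ftplib.FTP(HOST)
ftp.login()              # anonymous login
ftp.cwd(REMOTE_DIR)

for name in ftp.nlst():  # every file in the directory
    with open(os.path.join(LOCAL_DIR, name), "wb") as fh:
        ftp.retrbinary("RETR " + name, fh.write)
    print("saved", name)

ftp.quit()
```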

The non-coders hit dead ends too. Throughout the morning they racked up 404 "Page not found" errors across NASA's Earth Observing System website. And they more than once ran across empty databases, like the Global Change Data Center's reports archive and one of NASA's atmospheric CO2 datasets.

And this is where the real problem lies. They don't know when or why this data disappeared from the web (or if anyone backed it up first). Scientists who understand it better will have to go back and take a look. But meantime, DataRefuge and EDGI understand that they need to be monitoring those changes and deletions. That's more work than a human could do.

So they're building software that can do it automatically.

Future Farming

Later that afternoon, two dozen or so of the most advanced software builders gathered around whiteboards, sketching out tools they'll need. They worked out filters to separate mundane updates from major shake-ups, and explored blockchain-like systems to build auditable ledgers of alterations. Basically it's an issue of what engineers call version control—how do you know if something has changed? How do you know if you have the latest? How do you keep track of the old stuff?
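
At its simplest, that kind of monitoring is periodic hashing plus an append-only log. A toy sketch of the idea (not the volunteers' actual design), with a hypothetical page to watch:

```python
# Toy sketch of version-control-style monitoring: hash each page on every pass
# and append a ledger entry whenever the content changes.
import datetime
import hashlib
import json
import requests

LEDGER = "changes.jsonl"   # append-only record of observed changes
last_seen = {}             # last known hash per URL (kept in memory here)

def check(url):
    body = requests.get(url, timeout=30).content
    digest = hashlib.sha256(body).hexdigest()
    if last_seen.get(url) != digest:
        entry = {
            "url": url,
            "sha256": digest,
            "seen_at": datetime.datetime.utcnow().isoformat() + "Z",
        }
        with open(LEDGER, "a") as fh:
            fh.write(json.dumps(entry) + "\n")
        last_seen[url] = digest

check("https://www.example.gov/climate-report")  # hypothetical page to watch
```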

There wasn't enough time for anyone to start actually writing code, but a handful of volunteers signed on to build out tools. That's where DataRefuge and EDGI organizers really envision their movement going: a vast, decentralized network spanning all 50 states and Canada. Some volunteers can code tracking software from home. And others can simply archive a little bit every day.

By the end of the day, the group had collectively loaded 8,404 NASA and DOE webpages onto the Internet Archive, effectively covering the entirety of NASA's earth science efforts. They'd also built ways in to download 25 gigabytes from 101 public datasets, and were expecting even more to come in as scripts on some of the larger datasets (like Tek's) finished running. But even as they celebrated over pints of beer at a pub on Euclid Street, the mood was somber.

There was still so much work to do. "Climate change data is just the tip of the iceberg," says Eric Kansa, an anthropologist who manages archaeological data archiving for the non-profit group Open Context. "There are a huge number of other datasets being threatened, with cultural, historical, sociological information." A panicked friend at the National Park Service had tipped him off to a huge data portal that contains everything from park visitation stats to GIS boundaries to inventories of species. While he sat at the bar, his computer ran scripts to pull out a list of everything in the portal. When it's done, he'll start working his way through each quirky dataset.

UPDATE 5:00pm Eastern, 2/15/17: Phrasing in this story has been updated to clarify when changes were made to federal websites. Some data is missing, but it is still unclear when that data was removed.

Source article via http://www.wired.com/

It was December 2012, and Doug Burger was standing in front of Steve Ballmer, trying to predict the future.

Ballmer, the big, bald, boisterous CEO of Microsoft, sat in the lecture room on the ground floor of Building 99, home base for the company's blue-sky R&D lab just outside Seattle. The tables curved around the outside of the room in a U shape, and Ballmer was surrounded by his top lieutenants, his laptop open. Burger, a computer chip researcher who had joined the company four years earlier, was pitching a new idea to the execs. He called it Project Catapult.

The prototype was a dedicated box with six FPGAs, shared by a rack full of servers. If the box went on the fritz, or if the machines needed more than six FPGAs—increasingly likely given the complexity of the machine learning models—all those machines were out of luck. Bing's engineers hated it. "They were right," Larus says.

So Burger's team spent many more months building a second prototype. This one was a circuit board that plugged into each server and included only one FPGA. But it also connected to all the other FPGA boards on all the other servers, creating a giant pool of programmable chips that any Bing machine could tap into.

That was the prototype that got Qi Lu on board. He gave Burger the money to build and test over 1,600 servers equipped with FPGAs. The team spent six months building the hardware with help from manufacturers in China and Taiwan, and they installed the first rack in an experimental data center on the Microsoft campus. Then, one night, the fire suppression system went off by accident. They spent three days getting the rack back in shape—but it still worked.

Over several months in 2013 and 2014, the test showed that Bing's "decision tree" machine-learning algorithms ran about 40 times faster with the new chips. By the summer of 2014, Microsoft was publicly saying it would soon move this hardware into its live Bing data centers. And then the company put the brakes on.

Searching for More Than Bing

Bing dominated Microsoft's online ambitions in the early part of the decade, but by 2015 the company had two other massive online services: the business productivity suite Office 365 and the cloud computing service Microsoft Azure. And like all of their competitors, Microsoft executives realized that the only efficient way of running a growing online empire is to run all services on the same foundation. If Project Catapult was going to transform Microsoft, it couldn't be exclusive to Bing. It had to work inside Azure and Office 365, too.

The problem was, Azure executives didn’t care about accelerating machine learning. They needed help with networking. The traffic bouncing around Azure’s data centers was growing so fast, the service’s CPUs couldn’t keep pace. Eventually, people like Mark Russinovich, the chief architect on Azure, saw that Catapult could help with this too—but not the way it was designed for Bing. His team needed programmable chips right where each server connected to the primary network, so they could process all that traffic before it even got to the server.

The first prototype of the FPGA architecture was a single box shared by a rack of servers (Version 0). Then the team switched to giving individual servers their own FPGAs (Version 1). And then they put the chips between the servers and the overall network (Version 2). WIRED

So the FPGA gang had to rebuild the hardware again. With this third prototype, the chips would sit at the edge of each server, plugging directly into the network, while still creating a pool of FPGAs that any machine could tap into. That started to look like something that would work for Office 365, too. Project Catapult was ready to go live at last.

Larus describes the many redesigns as an extended nightmare—not because they had to build new hardware, but because they had to reprogram the FPGAs every time. "That is just horrible, much worse than programming software," he says. "Much more difficult to write. Much more difficult to get correct." It's finicky work, like trying to change tiny logic gates on the chip.

Now that the final hardware is in place, Microsoft faces that same challenge every time it reprograms these chips. "It's a very different way of seeing the world, of thinking about the world," Larus says. But the Catapult hardware costs less than 30 percent of everything else in the server, consumes less than 10 percent of the power, and processes data twice as fast as the company could without it.

The rollout is massive. Microsoft Azure uses these programmable chips to route data. On Bing, which holds an estimated 20 percent of the worldwide search market on desktop machines and about 6 percent on mobile phones, the chips are facilitating the move to the new breed of AI: deep neural nets. And according to one Microsoft employee, Office 365 is moving toward using FPGAs for encryption and compression as well as machine learning—for all of its 23.1 million users. Eventually, Burger says, these chips will power all Microsoft services.

Wait—This Actually Works?

"It still stuns me," says Peter Lee, "that we got the company to do this." Lee oversees an organization inside Microsoft Research called NExT, short for New Experiences and Technologies. After taking over as CEO, Satya Nadella personally pushed for the creation of this new organization, and it represents a significant shift from the 10-year reign of Ballmer. It aims to foster research that can see the light of day sooner rather than later—that can change the course of Microsoft now rather than years from now. Project Catapult is a prime example. And it is part of a much larger change across the industry. "The leaps ahead," Burger says, "are coming from non-CPU technologies."

Peter Lee. Clayton Cotterell for WIRED

All the Internet giants, including Microsoft, now supplement their CPUs with graphics processing units, chips designed to render images for games and other highly visual applications. When these companies train their neural networks to, for example, recognize faces in photos—feeding in millions and millions of pictures—GPUs handle much of the calculation. Some giants like Microsoft are also using alternative silicon to execute their neural networks after training. And even though it’s crazily expensive to custom-build chips, Google has gone so far as to design its own processor for executing neural nets, the tensor processing unit.

With its TPUs, Google sacrifices long-term flexibility for speed. It wants to, say, eliminate any delay when recognizing commands spoken into smartphones. The trouble is that if its neural networking models change, Google must build a new chip. But with FPGAs, Microsoft is playing a longer game. Though an FPGA isn’t as fast as Google’s custom build, Microsoft can reprogram the silicon as needs change. The company can reprogram not only for new AI models, but for just about any task. And if one of those designs seems likely to be useful for years to come, Microsoft can always take the FPGA programming and build a dedicated chip.

A newer version of the final hardware, V2, a card that slots into the end of each Microsoft server and connects directly to the network. Clayton Cotterell for WIRED

Microsoft's services are so large, and they use so many FPGAs, that they're shifting the worldwide chip market. The FPGAs come from a company called Altera, and Intel executive vice president Diane Bryant tells me that Microsoft is why Intel acquired Altera last summer—a deal worth $16.7 billion, the largest acquisition in the history of the largest chipmaker on Earth. By 2020, she says, a third of all servers inside all the major cloud computing companies will include FPGAs.

It’s a typical tangle of tech acronyms. CPUs. GPUs. TPUs. FPGAs. But it’s the subtext that matters. With cloud computing, companies like Microsoft and Google and Amazon are driving so much of the world’s technology that those alternative chips will drive the wider universe of apps and online services. Lee says that Project Catapult will allow Microsoft to continue expanding the powers of its global supercomputer until the year 2030. After that, he says, the company can move toward quantum computing.

Later, when we talk on the phone, Nadella tells me much the same thing. They're reading from the same Microsoft script, touting a quantum-enabled future of ultrafast computers. Considering how hard it is to build a quantum machine, this seems like a pipe dream. But just a few years ago, so did Project Catapult.

Correction: This story originally implied that the Hololens headset was part of Microsoft’s NExT organization. It was not.

Source article via http://www.wired.com/

Image: CBS BROADCASTING

Amazon just quietly updated one of the most important parts of the Echo in a move that is sure to delight Star Trek fans everywhere.

The company added "computer" to the list of supported wake words for its Echo devices, no doubt a reference to the powerful voice-activated computer onboard the Starship Enterprise.

It looks like Amazon has been rolling out the update slowly over the last few days, though the company hasn’t said much about it beyond an update to its support page, which details how to change it via the Alexa app.

While Amazon has yet to officially confirm the origins of the new name, executives at the company have previously said the Enterprise's all-knowing computer served as the original inspiration for what eventually became Alexa. Amazon CEO Jeff Bezos is also a lifelong Star Trek fan; he was even given a cameo in Justin Lin's Star Trek Beyond.

"Our vision is to create a voice-controlled computer in the cloud, Alexa, that can do exactly what the Star Trek computer did," David Limp, the senior vice president at Amazon who oversees Alexa, said during an appearance at Fortune's Brainstorm conference.

However, as Mashable noted previously, the company eventually opted for the name “Alexa” to serve as the primary wake word, though users could also change it to “Amazon” or “Echo.”

So while Amazon is still a long way off from realizing its goal of an Echo as powerful as the mythical Star Trek computer, fans at least have a new way to geek out over Alexa’s Trekkie-inspired roots.

h/t: Verge

Source article via http://mashable.com/
