Google launches Cloud Scheduler, a managed cron service

Google Cloud is getting a managed cron service for running batch jobs. Cloud Scheduler, as the new service is called, provides all the functionality of the standard command-line cron service you probably love to hate, but with the reliability and ease of use of a managed service in the cloud.

The targets for Cloud Scheduler jobs can be any HTTP/S endpoint, as well as Google’s own Cloud Pub/Sub topics and App Engine applications. Developers can manage these jobs through a UI in the Google Cloud Console, via a command-line interface or through an API.

“Job schedulers like cron are a mainstay of any developer’s arsenal, helping run scheduled tasks and automating system maintenance,” Google product manager Vinod Ramachandran notes in today’s announcement. “But job schedulers have the same challenges as other traditional IT services: the need to manage the underlying infrastructure, operational overhead of manually restarting failed jobs and lack of visibility into a job’s status.”

As Ramachandran also notes, Cloud Scheduler, which is currently in beta, guarantees delivery of a job to its target, ensuring that important jobs are indeed started. If the target is App Engine or Pub/Sub, those services will also return a success code (or an error code, if things go awry). The company stresses that Cloud Scheduler also makes it easy to automate retries when things go wrong.
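The automated retries the company highlights are, at bottom, the familiar retry-with-backoff pattern. The sketch below is our illustration of that general technique, not Cloud Scheduler’s actual implementation; the function name and parameters are hypothetical:

```python
import time


def run_with_retries(job, max_attempts=3, base_delay=1.0):
    """Run a job callable, retrying with exponential backoff on failure.

    job: a zero-argument callable that raises on failure.
    max_attempts: total tries before giving up and re-raising.
    base_delay: seconds to wait after the first failure; doubles each retry.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return job()
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts; surface the error to the caller
            time.sleep(base_delay * 2 ** (attempt - 1))
```

Cloud Scheduler exposes comparable knobs, such as retry counts and backoff durations, as per-job configuration, so you don’t have to write this loop yourself.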

Google is obviously not the first company to hit upon this concept. There are a few startups that also offer a similar service, and Google’s competitors like Microsoft also offer comparable tools.

Google provides developers with a free quota of three jobs per month. Additional jobs cost $0.10 per job per month.
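To make the pricing concrete, here is a small sketch of the monthly bill; the function name is ours, and the defaults come from the published numbers above:

```python
def scheduler_monthly_cost(num_jobs, free_quota=3, price_per_job=0.10):
    """Estimate the monthly Cloud Scheduler bill for a given job count.

    Billing is per job per month; the first `free_quota` jobs are free.
    """
    billable = max(0, num_jobs - free_quota)
    return billable * price_per_job
```

Ten jobs, for example, would run $0.70 per month: seven billable jobs at $0.10 each.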

Fantasmo pivots to scooter cameras that keep them off sidewalks

GPS is too inaccurate to tell if a scooter is being driven or parked in off-limits areas. But as scooter startups compete for permits from city governments, they need a way to prove their riders play by the rules. That’s where Fantasmo’s new scooter positioning camera comes in.

The augmented reality mapping startup had been building the Camera Positioning Standard to give self-driving cars, robots and AR games a dynamically updated understanding of the real world around them. But now Fantasmo is focusing on the urgent use case of scooter accountability.

Its camera attaches to personal electric vehicles, captures video and matches that against Fantasmo’s map to reliably identify if a scooter is being illegally ridden on the sidewalk or parked in the middle of the walkway. Scooter companies could make their vehicles beep and slowly lose acceleration where not allowed, issue fines for parking in the wrong spot, notify redistribution teams to move errant vehicles or ban riders who consistently break their terms.

The tech could even make maps of available scooters more precise so you’re not wandering around searching. And scooter companies could use Fantasmo’s data to demonstrate that their riders are the most respectful.

“Scooters are under threat unless they find ways to work with cities to prevent sidewalk riding and make sure they’re parked in places the cities deem appropriate. 2D image capture can be leveraged to build out semantic, 3D maps of cities and provide a hyper-accurate position of the scooter,” says Fantasmo co-founder Jameson Detweiler. “So-called visual positioning is more precise than GPS and has centimeter level accuracy in dense urban environments — a notoriously bad environment for GPS. Visual positioning is accurate enough that a scooter can know when it is in a prohibited zone even if the zone is only as wide as a sidewalk.”

You can see in the video below how GPS can’t tell the difference, while Fantasmo shows green icons when the scooter is on the street and red ones when on sidewalks.

Originally founded in 2014 to build AR games, Fantasmo was started by Detweiler, who’d previously built startup website builder LaunchRock, and Ryan Measel, who holds a PhD in electrical engineering. Fantasmo has raised $2.2 million in funding, led by TenOneTen Ventures, to build decentralized 3D maps of the world. Instead of the expensive LIDAR sensors used on autonomous vehicles, a simple 2D camera with the right software is sufficient for positioning.

So why wouldn’t scooter companies just launch their own camera systems? Well, beyond Fantasmo’s specialized expertise from years working on AR positioning, it benefits from network effects. Each client, across industry verticals, contributes the data it collects to Fantasmo’s collaborative maps. That means if construction or an event changes a street’s layout, the first Fantasmo camera that comes across it updates everyone else’s maps. An individual mobility startup going it alone might end up with less accurate maps while wasting resources far outside its core purpose. Developers and personal vehicle companies that want to work with Fantasmo can apply for beta access on its website.

The vision is to build “a next-generation Open Street Map that gets all the inputs to work together,” Detweiler explains. “Eventually you’ll have self-driving scooters to do redistribution,” he says, rather than having humans load them in trucks and place them where they’ll get rented next. Without super accurate maps, the idea of passenger-less scooters rampaging through cities is terrifying. “There’s definitely a horror movie or three in that concept right there.”

If a more open AR map like Fantasmo’s doesn’t win, we could end up with a tech giant like Google hoarding this data. “I think the crowd of all these devices will be more powerful,” Detweiler concludes. “It might take time, but that network effect would be hard to beat.”

A blockchain firm bought asteroid mining company Planetary Resources

Here’s a match made in…I don’t know, somewhere on the blockchain, I guess. Pioneering space startup Planetary Resources was acquired by, of all things, a blockchain firm this week. ConsenSys, a Brooklyn-based firm that specializes in all things Ethereum, issued an announcement noting that it has snagged the asteroid mining company.

It’s not entirely clear how the two companies will work together, though ConsenSys founder Joe Lubin (who also helped author Ethereum) did manage to mention “decentralizing space endeavors,” which is certainly on-brand for the head of a blockchain company.

“I admire Planetary Resources for its world class talent, its record of innovation, and for inspiring people across our planet in support of its bold vision for the future,” Lubin said in a statement tied to the news. “Bringing deep space capabilities into the ConsenSys ecosystem reflects our belief in the potential for Ethereum to help humanity craft new societal rule systems through automated trust and guaranteed execution. And it reflects our belief in democratizing and decentralizing space endeavors to unite our species and unlock untapped human potential.”

Lubin also promised to offer up more information in the coming months. In the meantime, Planetary Resources CEO Chris Lewicki (formerly of NASA JPL) and general counsel Brian Israel will both be joining ConsenSys. Here’s what Lewicki had to say about the matter: “I am proud of our team’s extraordinary accomplishments, grateful to our visionary supporters, and delighted to join ConsenSys in building atop our work to expand humanity’s economic sphere of influence into the Solar System.”

Founded in 2010 as Arkyd Astronautics, Planetary Resources was considered a bright light in the world of privatized space companies, with X Prize founder Peter Diamandis on-board as director. Earlier this year, however, the company noted that it was rethinking its approach and making cutbacks after failing to secure its most recent funding round.

Facebook reorganizes Oculus for AR/VR’s long haul

Facebook is again looking to whip Oculus into shape for its 10-year journey towards making virtual reality mainstream. According to two sources, Facebook reorganized its AR and VR team this week from a divisional structure focused around products to a functional structure focused around technology areas of expertise. While no one was laid off, the change could eliminate redundancies by uniting specialists so they can iterate towards long-term progress rather than being separated into groups dedicated to particular gadgets.

Facebook confirmed the reorg to TechCrunch, with a spokesperson providing this statement: “We made some changes to the AR/VR organization earlier this week. These were internal changes and won’t impact consumers or our partners in the developer community.” Oculus CTO John Carmack and Oculus co-founder/newly-promoted Head of PC VR Nate Mitchell will remain in their leadership positions within VP of AR/VR Andrew ‘Boz’ Bosworth’s hardware wing of the company.

The shift obviously communicates that Facebook believes Oculus could be running more effectively. Organizing the company around areas of expertise rather than broader divisions is probably more appropriate for a moonshot effort that can’t afford redundancies. On the other hand, keeping expertise siloed could isolate new approaches and advancements from reaching other teams. As the company builds out its first full lineup of headsets, there seems to be significant overlap in the tech problems and products being tackled by those working on mobile and PC products.

TechCrunch reported earlier this week that the company is planning to release a new Rift headset as early as 2019, possibly called the Rift S, which will feature upgraded displays and an inside-out tracking system. The company’s “Rift 2” project, codenamed Caspar, was left behind in the reorganization, a source tells us. We can’t confirm whether any other products or concepts have been shelved.

While an immersive virtual world that users can hang out and communicate in certainly seems to fit Facebook’s broader mission, the company has spent the better part of the past few years deciding how a costly, ambitious venture like Oculus fits into its corporate structure.

At first, things went smoothly. The company and its empowered co-founders were building out a developer network and prepping for the launch of their Rift headset after creating a successful partnership with Samsung for the Gear VR. Then the company’s good fortune turned as the Rift headset was racked by expensive delays and Oculus failed to ship the company’s Touch motion controllers at launch, losing some initial ground to HTC.

By the end of 2016, it was announced that co-founder Brendan Iribe was out as CEO and that the company would be reorganizing around divisions focused on things like PC VR, mobile and content, with Xiaomi exec Hugo Barra coming aboard as VP of VR to lead the new effort, working directly beneath CEO Mark Zuckerberg. An additional layer of oversight has since been added: Bosworth was put in charge of the company’s consumer hardware ambitions, with Oculus as a central pillar. His title is now VP of AR/VR.

The absorption of Oculus deeper into Facebook’s corporate structure was a trend that soon replicated itself as the company looked to rein in the independent teams under a more cohesive vision. The culmination of this was a major executive reshuffle earlier this year that changed the landscape for how divisions within the company were managed.

Now, they’re changing things up even more.


The new structure sounds like it could coordinate efforts around more general lines like hardware and software, allowing insights to flow more intuitively across Facebook’s planned devices.

Given the slow adoption of VR and the engineering challenges of AR headsets (which Facebook’s head of AR, Ficus Kirkpatrick, confirmed the company was building at TechCrunch’s LA conference last month), this structure could help Oculus iterate its way to long-term success rather than just getting the next product out the door.

If Facebook is going to beat companies solely focused on AR like Magic Leap, and potential incumbent invaders like Apple if it so chooses, it needs to maximize efficiency. And if it’s going to get both developers and users excited about these next-generation computing platforms, it will have to produce products that make cutting-edge technologies feel unified and accessible. That’s a lot easier when everyone’s not stepping on each other’s virtual shoes.

In a court filing, Edward Snowden says a report critical to an NSA lawsuit is authentic

An unexpected declaration by whistleblower Edward Snowden filed in court this week adds a new twist in a long-running lawsuit against the National Security Agency’s surveillance programs.

The case, filed by the Electronic Frontier Foundation a decade ago, seeks to challenge the government’s alleged illegal and unconstitutional surveillance of Americans, who are largely covered under the Fourth Amendment’s protections against warrantless searches and seizures.

It’s a big step forward for the case, which had stalled largely because the government refused to confirm that a leaked document was authentic or accurate.

News of the surveillance broke in 2006, when AT&T technician Mark Klein revealed that the NSA was tapping into AT&T’s network backbone. He alleged that a secret, locked room (dubbed Room 641A) in the AT&T facility in San Francisco where he worked was one of many around the U.S. used by the government to monitor communications, both domestic and overseas. President George W. Bush authorized the NSA to secretly wiretap Americans’ communications shortly after the September 11 terrorist attacks in 2001.

Much of the EFF’s complaint relied on Klein’s testimony until 2013, when Snowden, a former NSA contractor, came forward with new revelations that described and detailed the vast scope of the U.S. government’s surveillance capabilities, which included participation from other phone giants — including Verizon (TechCrunch’s parent company).

Snowden’s signed declaration, filed on October 31, confirms that one of the documents he leaked, which the EFF relied on heavily for its case, is an authentic draft document written by the then-NSA inspector general in 2009. The draft exposed concerns about the legality of the Bush administration’s warrantless surveillance program, Stellar Wind, particularly the collection of bulk email records on Americans.

The draft top-secret document was never published, and the NSA had refused to confirm or deny the authenticity of the 2009 inspector general report, ST-09-0002, even though it has been public for many years.

Snowden, as one of the few former NSA staffers who can speak more freely than former government employees about the agency’s surveillance, confirmed that the document is “authentic.”

“I read its contents carefully during my employment,” he said in his declaration. “I have a specific and strong recollection of this document because it indicated to me that the government had been conducting illegal surveillance.”

Snowden left his home in Hawaii for Hong Kong in 2013, where he gave tens of thousands of documents to reporters. His passport was canceled as he traveled to Moscow to catch an onward flight. He later claimed political asylum in Russia, where he currently lives with his partner.

U.S. prosecutors charged Snowden with espionage.

EFF executive director Cindy Cohn said that the NSA’s refusal to authenticate the leaked documents “is just another step in its practice of falling back on weak technicalities to prevent the public courts from ruling on whether our Constitution allows this kind of mass surveillance of hundreds of millions of nonsuspect people.”

The EFF said in another filing that the draft report “further confirms” the participation of phone companies in the government’s surveillance programs.

The case continues, though a court hearing has not been set.

Amazon reportedly in ‘advanced talks’ to open HQ2 in Virginia

These sorts of major decisions no doubt take some time. And, of course, Amazon is clearly milking the decision-making process for all it’s worth as cities across the States roll out the red carpet. According to a new report from The Washington Post, however, the big news about where the company will open its second headquarters may come sooner rather than later.

The Bezos-owned paper reports that the retail giant is in “advanced talks” with Crystal City, a neighborhood in Northern Virginia that lies just south of Washington, D.C. Those conversations are reportedly further along and “more detailed” than any of the others Amazon has had with fellow top contenders. Nearby metro stops and proximity to a major airport are requirements that Crystal City fulfills.

Among the topics broached during the talks are questions around building capacity and how quickly the company can start moving in. In fact, a top local real estate developer has apparently unlisted some of its buildings in the past month in anticipation of an announcement. Buildings for the initial move of hundreds of employees could be occupied by Amazon within nine months.

No specifics on when exactly the announcement would arrive, though the paper notes that it’s being held until after the midterm elections, meaning it could potentially occur as soon as Wednesday.

The iPhone is reportedly getting 5G in 2020

The first 5G phones are set to start arriving next year. Motorola plans to bring next-gen connectivity via a Mod for the Z3, and companies like LG and OnePlus have promised to deliver the tech baked into handsets at some point in 2019. iPhone users, on the other hand, may have to wait a bit longer.

The technology is, of course, an inevitability for Apple (along with everyone else, really), so it’s just a question of when. A new report from Fast Company (via The Verge) puts the timing at around a year and a half out.

The “source with knowledge of Apple’s plans” put the 5G iPhone’s arrival at some point in 2020, with Intel supplying the tech this time out. Apparently Apple and Intel are going through a bit of a rough patch of late, courtesy of heat/battery issues with the 8060 5G modem. Of course, things aren’t rough enough for the company to hit up Qualcomm again.

Given the ongoing battle between the two companies, that’s probably a bridge too far. Instead, Apple is holding out for Intel’s 8161 chip. 5G presents a solid opportunity for Intel to regain some of the substantial ground it ceded to Qualcomm in the mobile market the last time out.

A long and winding road to new copyright legislation

Dave Davis
Contributor

Dave Davis joined Copyright Clearance Center in 1994 and currently serves as a research analyst. He previously held directorships in both public libraries and corporate libraries and earned joint master’s degrees in Library and Information Sciences and Medieval European History from Catholic University of America.

Back in May, as part of a settlement, Spotify agreed to pay more than $112 million to clean up some copyright problems. Even for a service with millions of users, that had to leave a mark. No one wants to be dragged into court all the time, not even bold, disruptive technology start-ups.

On October 11th, the President signed the Hatch-Goodlatte Music Modernization Act (the “Act”, or “MMA”). The MMA goes back, legislatively, to at least 2013, when Chairman Goodlatte (R-VA) announced that, as Chairman of the House Judiciary Committee, he planned to conduct a “comprehensive” review of issues in US copyright law. Ranking Member Jerry Nadler (D-NY) was also deeply involved in this process, as were Senators Hatch (R-UT), Leahy (D-VT) and Wyden (D-OR). But this legislation didn’t fall from the sky; far from it.

After many hearings, several “roadshow” panels around the country, and a couple of elections, in early 2018 Goodlatte announced his intent to move forward on addressing several looming issues in music copyright before his planned retirement from Congress at the end of his current term (January 2019). With that deadline in place, the push was on, and through the spring and summer, the House Judiciary Committee and their colleagues in the Senate worked to complete the text of the legislation and move it through the process. By late September, the House and Senate versions had been reconciled and the bill moved to the President’s desk.

What’s all this about streaming?

As enacted, the Act instantiates several changes to music copyright in the US, especially as regards streaming music services. What does “streaming” refer to in this context? Basically, it occurs when a provider makes music available to listeners, over the internet, without creating a downloadable or storable copy: “Streaming differs from downloads in that no copy of the music is saved to your hard drive.”

“It’s all about the Benjamins.”

One part, by far the largest change in terms of money, provides that a new royalty regime be created for digital streaming of musical works, e.g. by services like Spotify and Apple Music. Pre-1972 recordings — and the creators involved in making them (including, for the first time, for audio engineers, studio mixers and record producers) — are also brought under this royalty umbrella.

These are significant, generally beneficial results for a piece of legislation. But to make this revenue bounty fully effective, a to-be-created licensing entity will have to be set up with the ability to first collect, and then distribute, the money. Think “ASCAP/BMI for streaming.” This new non-profit will be the first such “collective licensing” copyright organization set up in the US in quite some time.

Collective Licensing: It’s not “Money for Nothing”, right?

What do we mean by “collective licensing” in this context, and how will this new organization be created and organized to engage in it? Collective licensing is primarily an economically efficient mechanism for (A) gathering up monies due for certain uses of works under copyright (in this case, digital streaming of musical recordings) and (B) distributing the royalty checks back to the rights-holding parties (e.g. recording artists, their estates in some cases, and record labels). Generally speaking, in collective licensing:

 “…rights holders collect money that would otherwise be in tiny little bits that they could not afford to collect, and in that way they are able to protect their copyright rights. On the flip side, substantial users of lots of other people’s copyrighted materials are prepared to pay for it, as long as the transaction costs are not extreme.”

—Fred Haber, VP and Corporate Counsel, Copyright Clearance Center
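The pooling-and-distribution mechanics Haber describes can be sketched in a few lines of Python. The pro-rata split below is our illustrative assumption; the Act leaves the actual royalty formula to the new collective:

```python
def distribute_pool(pool, usage_counts):
    """Split a royalty pool pro rata by each rights holder's share of usage.

    pool: total money collected from licensees (e.g. streaming services).
    usage_counts: hypothetical play counts keyed by rights holder.
    """
    total = sum(usage_counts.values())
    return {holder: pool * count / total
            for holder, count in usage_counts.items()}
```

A $100 pool with a 3:1 usage split would pay out $75 and $25 respectively; a real-world distribution would also have to account for administrative costs and for works whose owners can’t yet be matched.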

The Act envisions the new organization setting up and implementing a new, extensive, and publicly accessible database of musical works and the rights attached to them. Nothing quite like this is currently available, although resources like Sony’s Gracenote suggest a good start along those lines. Once it is set up and the initial database has a sufficient number of records, the new collective licensing agency will then get down to the business of offering licenses:

“…a blanket statutory license administered by a nonprofit mechanical licensing collective. This collective will collect and distribute royalties, work to identify songs and their owners for payment, and maintain a comprehensive, publicly accessible database for music ownership information.”

— Regan A. Smith, General Counsel and Associate Register of Copyrights

The Liverpool beat group The Beatles, with John Lennon, Paul McCartney, George Harrison and Ringo Starr, take it easy resting their feet on a table during a break in rehearsals for the Royal Variety Show at the Prince of Wales Theatre, London, England, November 4, 1963. (AP Photo)

You “Can’t Buy Me Love”, so who is all this going to benefit?

In theory, the listening public should be the primary beneficiary. More music available through digital streaming services means more exposure, and potentially more money, for recording artists. For students of music, the new database of recorded works and licenses will serve to clarify who is (or was) responsible for what. Another public benefit will be fewer actions on digital streaming issues clogging up the courts.

There’s an interesting wrinkle in the Act providing for the otherwise authorized use of “orphaned” musical works, such that these can now be played in library or archival (i.e. non-profit) contexts. “Orphan works” are those which may still be protected under copyright, but for which the legitimate rights holders are unknown and, sometimes, undiscoverable. This is the first implementation of orphan works authorization in US copyright law. Cultural services, like Open Culture, can look forward to being able to stream more musical works without incurring risk or hindrance (provided that the proper forms are filled out), and this implies that some great music is now more likely to find new audiences and thereby be preserved for posterity. Even the Electronic Frontier Foundation (EFF), generally no great fan of new copyright legislation, finds something to like in the Act.

In the land of copyright wonks, and in another line of infringement suits, this resolution of the copyright status of musical recordings released before 1972 seems, in my opinion, fair and workable. In order to accomplish that, the Act also had to address the matter of the duration of these new copyright protections, which is always (post-1998) a touchy subject:

  • For recordings first published before 1923, the additional time period ends on December 31, 2021.
  • For recordings created between 1923 and 1946, the additional time period is 5 years after the general 95-year term.
  • For recordings created between 1947 and 1956, the additional time period is 15 years after the general 95-year term.
  • For works first published between 1957 and February 15, 1972, the additional time period ends on February 15, 2067.

(Source: US Copyright Office)


Money (That’s What I Want – and lots and lots of listeners, too.)

For the digital music services themselves, this statutory or ‘blanket’ license arrangement should mean fewer infringement actions being brought; it might even help their prospects for investment and encourage new and more innovative services to come into the mix.

“And, in The End…”

This new legislation, now the law of the land, extends the history of American copyright law in new and substantial ways. Its actual implementation is only now beginning. Although five years might seem like a lifetime in popular culture, in politics it amounts to several eons. And let’s not lose sight of the fact that the industry got over its perceived short-term self-interests enough, this time, to agree to support something that Congress could pass. That’s rare enough to take note of and applaud.

This law lacks perfection, as all laws do. The licensing regime it envisions will not satisfy everyone, but every constituent, every stakeholder, got something. From the perspective of right now, chances seem good that, a few years from now, the achievement of the Hatch-Goodlatte Music Modernization Act will be viewed as a net positive for creators of music, for the distributors of music, for scholars, fans of ‘open culture’, and for the listening public. In copyright, you can’t do better than that.

Amazon is letting users choose the day packages are delivered

Amazon Day looks like one of those options you’ll wonder how you ever managed to live without. The new feature, which is currently being tested with select Prime users, offers up a choice of delivery day during the checkout process.

In addition to the standard options (one- and two-day, et al.), most two-day delivery items will also come with the option to tick the day of the week you want them to show up at your front door. It’s a potential life (or at least package) saver for those of us who live in tricky apartment buildings and aren’t able to be home all of the time.

Amazon confirmed the feature with CNET this week, writing in a statement, “We’re excited to be testing a new service aimed at making the delivery experience more convenient for customers.”

As to why the option hasn’t been available until now, one imagines every new moving part further complicates the already complex world of shipping logistics. No word on when the rest of us will get access to the feature, but the company started testing it on a select group late this week — just in time, it seems, for the Black Friday shipocalypse.