To fight the scourge of open offices, ROOM sells rooms

Noisy open offices don’t foster collaboration, they kill it, according to a Harvard study that found the less-private floor plan led to a 73 percent drop in face-to-face interaction between employees and a rise in emailing. The problem is plenty of young companies and big corporations have already bought into the open office fad. But a new startup called ROOM is building a prefabricated, self-assembled solution. It’s the Ikea of office phone booths.

The $3,495 ROOM One is a sound-proofed, ventilated, powered booth that can be built in new or existing offices to give employees a place to take a video call or get some uninterrupted flow time to focus on work. For comparison, ROOM co-founder Morten Meisner-Jensen says “Most phone booths are $8,000 to $12,000. The cheapest competitor to us is $6,000 — almost twice as much.” Though booths start at $4,500 from TalkBox and $3,995 from Zenbooth, they tack on $1,250 and $1,650 for shipping while ROOM ships for free. They’re all dividing the market of dividing offices.

The idea might seem simple, but the booths could save businesses a ton of money on lost productivity, recruitment, and retention if they keep employees from going crazy amidst sales call cacophony. Less than a year after launch, ROOM has hit a $10 million revenue run rate thanks to 200 clients ranging from startups to Salesforce, Nike, NASA, and JP Morgan. That’s attracted a $2 million seed round from Slow Ventures that adds to angel funding from Flexport CEO Ryan Petersen. “I am really excited about it since it is probably the largest revenue-generating company Slow has seen at the time of our initial Seed stage investment,” says partner Kevin Colleran.

“It’s not called ROOM because we build rooms,” Meisner-Jensen tells me. “It’s called ROOM because we want to make room for people, make room for privacy, and make room for a better work environment.”

Phone Booths, Not Sweatboxes

You might be asking yourself, enterprising reader, why you couldn’t just go to Home Depot, buy some supplies, and build your own in-office phone booth for way less than $3,500. Well, ROOM’s co-founders tried that. The result was…moist.

Meisner-Jensen has design experience from the Danish digital agency Revolt that he started before co-founding digital book service Mofibo and selling it to Storytel. “In my old job we had to go outside and take the calls, and I’m from Copenhagen so that’s a pretty cold experience half the year.” His co-founder Brian Chen started Y Combinator-backed smart suitcase company Bluesmart where he was VP of operations. They figured they could attack the office layout issue with hammers and saws. I mean, they do look like superhero alter-egos.

Room co-founders (from left): Brian Chen and Morten Meisner-Jensen

“To combat the issues I myself would personally encounter with open offices, as well as colleagues, we tried to build a private ‘phone booth’ ourselves,” says Meisner-Jensen. “We didn’t quite understand the specifics of air ventilation or acoustics at the time, so the booth got quite warm – warm enough that we coined it ‘the sweatbox.’”

With ROOM, they got serious about the product. The 10-square-foot ROOM One booth ships flat and can be assembled in under 30 minutes by two people with a hex wrench. All it needs is an outlet to plug into to power its light and ventilation fan. Each is built from 1,088 recycled plastic bottles for noise cancellation, so you’re not supposed to hear anything from outside. The whole box is 100 percent recyclable, plus it can be torn down and rebuilt if your startup implodes and you’re being evicted from your office.

The ROOM One features a bar-height desk with outlets and a magnetic bulletin board behind it, though you’ll have to provide your own stool of choice. It’s actually designed not to be so comfy that you end up napping inside, which doesn’t seem like it’d be a problem with this somewhat cramped spot. “To solve the problem with noise at scale you want to provide people with space to take a call but not camp out all day,” Meisner-Jensen notes.

Booths by Zenbooth, Cubicall, and TalkBox (from left)

A Place To Get Into Flow

Couldn’t office managers just buy noise-cancelling headphones for everyone? “It feels claustrophobic to me,” he laughs, but then outlines why a new workplace trend requires more than headphones. “People are doing video calls and virtual meetings much, much more. You can’t have all these people walking by you and looking at your screen. [A booth is] also giving you your own space to do your own work which I don’t think you’d get from a pair of Bose. I think it has to be a physical space.”

But with plenty of companies able to construct physical spaces, it will be a challenge for ROOM to convey the subtleties of its build quality that warrant its price. “The biggest risk for ROOM right now are copycats,” Meisner-Jensen admits. “Someone entering our space claiming to do what we’re doing better but cheaper.” Alternatively, ROOM could lock in customers by offering a range of office furniture products. The co-founder hinted at future products, saying ROOM is already receiving demand for bigger multi-person prefab conference rooms and creative room divider solutions.

The importance of privacy goes beyond improved productivity when workers are alone. If they’re exhausted from overstimulation in a chaotic open office, they’ll have less energy for purposeful collaboration when the time comes. The bustle could also make them reluctant to socialize in off-hours, which could lead them to burn out and change jobs faster. Tech companies in particular are in a constant war for talent, and ROOM Ones could be perceived as a bigger perk than free snacks or a ping-pong table that only makes the office louder.

“I don’t think the solution is to go back to a world of cubicles and corner offices,” Meisner-Jensen concludes. It could take another decade for office architects to correct the overenthusiasm for open offices despite the research suggesting their harm. For now, ROOM’s co-founder is concentrating on “solving the issue of noise at scale” by asking “How do we make the current workspaces work in the best way possible?”

This robot maintains tender, unnerving eye contact

Humans already find it unnerving enough when extremely alien-looking robots are kicked and interfered with, so one can only imagine how much worse it will be when they make unbroken eye contact and mirror your expressions while you heap abuse on them. This is the future we have selected.

The Simulative Emotional Expression Robot, or SEER, was on display at SIGGRAPH here in Vancouver, and it’s definitely an experience. The robot, a creation of Takayuki Todo, is a small humanoid head and neck that responds to the nearest person by making eye contact and imitating their expression.

It doesn’t sound like much, but it’s pretty complex to execute well, which, despite a few glitches, SEER managed to do.

At present it alternates between two modes: imitative and eye contact. Both, of course, rely on a nearby (or, one can imagine, built-in) camera that recognizes and tracks the features of your face in real time.

In imitative mode the positions of the viewer’s eyebrows and eyelids, and the position of their head, are mirrored by SEER. It’s not perfect — it occasionally freaks out or vibrates because of noisy face data — but when it worked it managed rather a good version of what I was giving it. Real humans are more expressive, naturally, but this little face with its creepily realistic eyes plunged deeply into the uncanny valley and nearly climbed the far side.
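
Conceptually, a mirroring loop like that is mostly mapping plus smoothing. Here is a minimal runnable sketch of the idea in Python; the tracker stub, feature names, servo ranges, and smoothing constant are all hypothetical stand-ins, not Todo's implementation.

```python
# A toy "imitative mode" loop: read facial features, smooth them,
# map them onto servo angles. Everything here (feature names, ranges,
# smoothing factor) is a hypothetical stand-in, not SEER's actual code.
import random

FEATURES = ("eyebrow_raise", "eyelid_open", "head_yaw", "head_pitch")

def fake_tracker():
    # Stand-in for a real landmark detector's per-frame output,
    # normalized to [-1, 1].
    return {k: random.uniform(-1, 1) for k in FEATURES}

def ema(prev, new, alpha=0.3):
    # Exponential smoothing damps the jitter from noisy face data,
    # the same jitter that makes the robot occasionally "vibrate."
    return alpha * new + (1 - alpha) * prev

def to_servo_angle(value, lo=-30.0, hi=30.0):
    # Map a normalized feature onto a servo's angular range (degrees).
    return lo + (value + 1) / 2 * (hi - lo)

state = dict.fromkeys(FEATURES, 0.0)
for frame in range(5):  # stand-in for the camera loop
    for k, v in fake_tracker().items():
        state[k] = ema(state[k], v)
    print({k: round(to_servo_angle(v), 1) for k, v in state.items()})
```

Tuning that smoothing factor is likely the whole game: too little and the face twitches on every bad frame, too much and the mirroring lags behind the viewer.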

Eye contact mode has the robot moving on its own while, as you might guess, making uninterrupted eye contact with whoever is nearest. It’s a bit creepy, but not in the way that some robots are — when you’re looked at by inadequately modeled faces, it just feels like bad VFX. In this case it was more the surprising amount of empathy you suddenly feel for this little machine.

That’s largely due to the delicate, childlike, neutral sculpting of the face and highly realistic eyes. If an Amazon Echo had those eyes, you’d never forget it was listening to everything you say. You might even tell it your problems.

This is just an art project for now, but the tech behind it is definitely the kind of thing you can expect to be integrated with virtual assistants and the like in the near future. Whether that’s a good thing or a bad one I guess we’ll find out together.

Smart speaker sales on pace to increase 50 percent by 2019

It seems Amazon didn’t know what it had on its hands when it released the first Echo in late-2014. The AI-powered speaker formed the foundation of the next big moment in consumer electronics. Those devices have helped mainstream consumer AI and opened the door to wide-scale adoption of connected home products.

New numbers from NPD, naturally, don’t show any sign of flagging for the category. According to the firm, the devices are set for 50 percent dollar growth between 2016-2017 and 2018-2019. The category is projected to add $1.6 billion through next year.

The Echo line has grown rapidly over the past four years, with Amazon adding the best-selling Dot and screen-enabled products like the Spot and Show. Google, meanwhile, has been breathing down the company’s neck with its own Home offerings. The company also recently added a trio of “smart displays” designed by LG, Lenovo and JBL.

A new premium category has also arisen, led by Apple’s first entry into the space, the HomePod. Google has similarly offered up the Home Max, and Samsung is set to follow suit with the upcoming Galaxy Home (which more or less looks like a HomePod on a tripod).

As all of the above players were no doubt hoping, smart speaker sales also appear to be driving sales of smart home products, with 19 percent of U.S. consumers planning to purchase one within the next year, according to the firm.

StarVR’s One headset flaunts eye-tracking and a double-wide field of view

While the field of VR headsets used to be more or less limited to Oculus and Vive, numerous competitors have sprung up as the technology has matured — and some are out to beat the market leaders at their own game. StarVR’s latest headset brings eye-tracking and a seriously expanded field of view to the game, and the latter especially is a treat to experience.

The company announced the new hardware at SIGGRAPH in Vancouver, where I got to go hands-on and eyes-in with the headset. Before you get too excited, though, keep in mind this set is meant for commercial applications — car showrooms, aircraft simulators, and so on. What that means is it’s going to be expensive and not as polished a user experience as consumer-focused sets.

That said, the improvements present in the StarVR One are significant and immediately obvious. Most important is probably the expanded FOV — 210 degrees horizontal and 130 vertical. That’s nearly twice the 110-degree horizontal field of view of the most popular headsets, and believe me, it makes a difference. (I haven’t tried the Pimax 8K, which has a similarly wide FOV.)

On Vive and Oculus sets I always had the feeling that I was looking through a hole into the VR world — a large hole, to be sure, but having your peripheral vision be essentially blank made it a bit claustrophobic.

In the StarVR headset, I felt like the virtual environment was actually around me, not just in front of me. I moved my eyes around much more rather than turning my head, with no worries about accidentally gazing at the fuzzy edge of the display. A 90 Hz refresh rate meant things were nice and smooth.

To throw shade at competitors, the demo I played (I was a giant cyber-ape defending a tower) could switch between the full FOV and a simulation of the 110-degree one found in other headsets. I suspect it was slightly exaggerated, but the difference really is clear.

It’s reasonably light and comfortable — no VR headset is really either. But it doesn’t feel as chunky as it looks.

The resolution of the custom AMOLED display is supposedly 5K. But the company declined to specify the actual resolution when I asked. They did, however, proudly proclaim full RGB pixels and 16 million sub-pixels. Let’s do the math:

16 million divided by 3 makes around 5.3 million full pixels. 5K isn’t a real standard, just shorthand for having around 5,000 horizontal pixels between the two displays. Divide 5.3 million by that and you get 1060. Rounding those off to semi-known numbers gives us 2560 pixels (per eye) for the horizontal and 1080 for the vertical resolution.
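
That arithmetic, runnable, with the caveat that only the 16 million sub-pixel figure comes from StarVR; everything derived from it is guesswork:

```python
# Back-of-the-envelope pixel math. Only the 16 million sub-pixel
# figure is StarVR's; "5K" is shorthand, and the rest is inference.
subpixels = 16_000_000
full_pixels = subpixels / 3          # full RGB pixels -> ~5.33 million
horizontal = 5_000                   # "5K", both eyes combined
vertical = full_pixels / horizontal  # -> ~1066, call it 1080
per_eye_horizontal = horizontal / 2  # -> 2500, call it 2560
print(full_pixels, round(vertical), per_eye_horizontal)
```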

That doesn’t fit the approximately 16:10 ratio of the field of view, but who knows? Let’s not get too bogged down in unknowns. Resolution isn’t everything — but generally, the more pixels the better.

The other major new inclusion is an eye-tracking system provided by Tobii. We knew eye-tracking in VR was coming; it was demonstrated at CES, and the Fove Kickstarter showed it was at least conceivable to integrate into a headset now-ish.

Unfortunately the demos of eye-tracking were pretty limited (think a heatmap of where you looked on a car) so, being hungry, I skipped them. The promise is good enough for now — eye tracking allows for all kinds of things, including a “foveated rendering” that focuses display power where you’re looking. This too was not being shown, however, and it strikes me that it is likely phenomenally difficult to pull off well — so it may be a while before we see a good demo of it.

One small but welcome improvement that eye-tracking also enables is automatic detection of interpupillary distance, or IPD — it’s different for everyone and can be important to rendering the image correctly. One less thing to worry about.

The StarVR One is compatible with SteamVR tracking, or you can get the XT version and build your own optical tracking rig — that’s for the commercial providers for whom it’s an option.

Although this headset will be going to high-end commercial types, you can bet that the wide FOV and eye tracking in it will be standard in the next generation of consumer devices. Having tried most of the other headsets, I can say with certainty that I wouldn’t want to go back to some of them after having experienced this one. VR is still a long way off from convincing me it’s worthwhile, but major improvements like these definitely help.

Pandora Premium comes to Google Assistant-powered devices

Pandora Premium is coming to Google Home, Mini, and Max devices, and other smart speakers and screens with Google Assistant built-in, the company announced this morning. The integration means listeners who pay for Pandora’s on-demand music service will be able to search and play any song, album, or playlist, just by asking Google, and can even search by lyrics, play their personalized “mood” playlists, and take other actions using their voice.

For example, Google Assistant users will be able to thumbs up and thumbs down tracks on Pandora, skip tracks, create new stations, or play a song again, using voice commands.

The service can also be set as the default on Google Home, so you don’t have to specify to play the songs via Pandora when issuing commands.

Support for Pandora Premium on Google Home has been long-awaited. Pandora Plus and Pandora’s free service have been available on Google Home since November 2016.

The Premium service, however, is Pandora’s true Spotify competitor, offering a more robust feature set in addition to on-demand music.

Access to personalized soundtracks is one of Pandora Premium’s newer features, and a potential selling point for the company’s top-tier service, along with this new Google Assistant integration.

In an effort to challenge Spotify, Pandora this spring rolled out its own set of personalized playlists based on listening behavior and other factors, built using its Music Genome. This made Pandora capable of creating over 60 personalized playlists. Most users will only see a subset of those – like “party soundtracks,” or those for moods like “happy” or “rainy days,” or those for various genres of music they like. Now these, too, can stream over Google Assistant-powered devices.

The ability to search by lyrics is another benefit to using Pandora Premium on Google Assistant devices – and an area where Spotify is glaringly absent. Not only does Spotify not offer lyrics search, it doesn’t even offer lyrics. And we’re hearing that it has no plans to launch this feature anytime soon, though it continues to test this.

Meanwhile, Spotify’s rivals are offering search by lyrics, including Amazon Music – which lets you do lyrics searches using Alexa – and Apple, which is rolling out lyrics search in the latest version of Apple Music. Many Spotify users are beginning to notice this missing feature, and regularly complain. At some point, Spotify’s inability to keep up with the market on voice (it has just barely managed a voice search button) and lyrics could give competitors an edge, along with Spotify’s lack of hardware, like Apple’s HomePod or Amazon’s Echo.

Apple Music, for instance, is now ahead of Spotify in North America, according to statements made by Apple CEO Tim Cook during the last earnings call.

Pandora’s potential is more of a mixed bag. There’s a growing market of those who pay for Pandora’s service. The company reported in July that it added 351,000 paying customers across both Premium and the mid-level tier, Pandora Plus, in the last quarter, bringing the total paying customer base to 6 million. That’s up 23% year-over-year. But its total active user base was down 6% year-over-year to 71.4 million.

But Pandora is addressing the needs of cross-platform support, in an effort to meet users anywhere they want to stream. It now supports over 2,000 connected devices including TVs, smart speakers, game consoles, streaming players, and more. These days, its listeners are increasingly using Pandora through voice-activated devices – up nearly 50% since last year, the company says.

Pandora is offering a free 90-day trial of Premium to Google Home users via the Google Home app on Android or the Play Store.

Pandora Premium was one of two major additions to Google Assistant devices on Tuesday – Deezer is also now available, allowing customers access to more than 36 million HiFi tracks and voice support, Google noted.

Nvidia’s new Turing architecture is all about real-time ray tracing and AI

In recent days, word about Nvidia’s new Turing architecture started leaking out of the Santa Clara-based company’s headquarters. So it didn’t come as a major surprise that the company today announced during its Siggraph keynote the launch of this new architecture and three new pro-oriented workstation graphics cards in its Quadro family.

Nvidia describes the new Turing architecture as “the greatest leap since the invention of the CUDA GPU in 2006.” That’s a high bar to clear, but there may be a kernel of truth here. These new Quadro RTX chips are the first to feature the company’s new RT Cores. “RT” here stands for ray tracing, a rendering method that basically traces the path of light as it interacts with the objects in a scene. This technique has been around for a very long time (remember POV-Ray on the Amiga?). Traditionally, though, it has been very computationally intensive, even if the results tend to look far more realistic. In recent years, ray tracing got a new boost thanks to faster GPUs and support from the likes of Microsoft, which recently added ray tracing support to DirectX.
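
For anyone who hasn't touched ray tracing since POV-Ray, the core idea fits in a few dozen lines: fire a ray per pixel, intersect it with the scene, shade by how directly the hit surface faces the light. Below is a toy Python sketch with one hard-coded sphere and ASCII output, nothing remotely like the hybrid pipeline the RT Cores accelerate.

```python
# Toy ray tracer: one sphere, one light, Lambertian shading. A bare
# illustration of "tracing the path of light," not Nvidia's pipeline.
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def ray_sphere(origin, direction, center, radius):
    # Solve |o + t*d - c|^2 = r^2 for the nearest positive t
    # (direction is normalized, so the quadratic's a == 1).
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

WIDTH, HEIGHT = 40, 20
CENTER, RADIUS = [0.0, 0.0, 3.0], 1.0
LIGHT = normalize([1.0, 1.0, -1.0])
for y in range(HEIGHT):
    row = ""
    for x in range(WIDTH):
        # One primary ray per "pixel," shot from the origin.
        d = normalize([(x / WIDTH - 0.5) * 2, (0.5 - y / HEIGHT) * 2, 1.0])
        t = ray_sphere([0.0, 0.0, 0.0], d, CENTER, RADIUS)
        if t is None:
            row += " "  # ray missed the scene
        else:
            hit = [t * di for di in d]
            n = normalize([h - c for h, c in zip(hit, CENTER)])
            shade = max(0.0, sum(a * b for a, b in zip(n, LIGHT)))
            row += ".:-=+*#%@"[min(8, int(shade * 9))]
    print(row)
```

The expensive part is that intersection test, repeated across millions of rays and many objects per frame; that is the workload the dedicated RT Cores are built to speed up.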

“Hybrid rendering will change the industry, opening up amazing possibilities that enhance our lives with more beautiful designs, richer entertainment and more interactive experiences,” said Nvidia CEO Jensen Huang. “The arrival of real-time ray tracing is the Holy Grail of our industry.”

The new RT cores can accelerate ray tracing by up to 25 times compared to Nvidia’s Pascal architecture, and Nvidia claims 10 GigaRays a second for the maximum performance.

Unsurprisingly, the three new Turing-based Quadro GPUs will also feature the company’s AI-centric Tensor Cores, as well as 4,608 CUDA cores that can deliver up to 16 trillion floating point operations in parallel with 16 trillion integer operations per second. The chips feature GDDR6 memory to expedite things, and support Nvidia’s NVLink technology to scale up memory capacity to up to 96GB and 100GB/s of bandwidth.

The AI part here is more important than it may seem at first. With NGX, Nvidia today also launched a new platform that aims to bring AI into the graphics pipelines. “NGX technology brings capabilities such as taking a standard camera feed and creating super slow motion like you’d get from a $100,000+ specialized camera,” the company explains, and also notes that filmmakers could use this technology to easily remove wires from photographs or replace missing pixels with the right background.

On the software side, Nvidia also today announced that it is open sourcing its Material Definition Language (MDL).

Companies ranging from Adobe (for Dimension CC) to Pixar, Siemens, Blackmagic Design, Weta Digital, Epic Games and Autodesk have already signed up to support the new Turing architecture.

All of this power comes at a price, of course. The new Quadro RTX line starts at $2,300 for a 16GB version, while stepping up to 24GB will set you back $6,300. Double that memory to 48GB and Nvidia expects that you’ll pay about $10,000 for this high-end card.

What the rumors say about Google’s upcoming Pixel 3

Now that the Note 9’s all good and official, it’s time to move onto the next major smartphone. The Google Pixel 3 leaks haven’t quite hit the fever pitch we saw with Samsung’s device ahead of launch — though there’s still time. After all, it seems likely the latest version of Google’s flagship Android handset won’t officially be official until October.

Even so, we’ve already seen a handful of credible leaks, including a full unboxing last week, so we’re starting to develop a pretty good picture of what we’re in for with the device.

For starters, there’s what looks to be a pretty sizable top notch. That Google would embrace the notch this time around is no surprise, really. In addition to being all the rage on practically every non-Samsung flagship, Google made a big deal of making Android Pie notch-friendly.

It seems to follow, then, that the company would embrace the polarizing design decision. That said, even by today’s notch-embracing standards, this is a big one. If anything, it seems that notches are actually getting bigger since Essential helped kickstart the trend by adding one to its first phone.

Speaking of embracing trends, Google dropped the headphone jack for the Pixel 2, after mocking Apple’s decision to do so a year prior. From the looks of it, the company is helping ease the transition with a pair of USB-C headphones, forgoing the necessity for a dongle (there does, however, still appear to be one in the box). Of course, you’ll still have to figure out a way to listen to music while charging the phone.

The design language is very similar to the company’s Pixel Buds, complete with loops for keeping them in place. It’s probably going too far to call them wired Pixel Buds, with all of the functionality that entails (translation and the like), but the company does appear to be taking some cues from the lukewarmly received wireless earbuds.

The Google Pixel 3 XL, meanwhile, appears to be going really large this time out. The new 6.4-inch Note 9’s got nothing on what’s reported to be a 6.7-inch display. We’re getting to the point where these things are basically tablets with calling capabilities. Of course, Samsung’s got the benefit of years of product design that have made it possible to sneak a large display into a relatively small footprint. Without actually holding the new device, it’s hard to say how unwieldy it really is.

Other bits and bobs include a Snapdragon 845, which is basically a prerequisite for any flagship smartphone to be taken seriously. The XL is also rumored to have a 3,430 mAh battery — actually a downgrade over last year’s model, in spite of yet another massive bump in screen size.

Chromebooks could dual-boot Windows 10 soon

Chrome OS has come a long way in the past few years. Even so, it’s still not the full-fledged operating system many of us require on our desktop machines. Google is reportedly looking to address that, in part, by adding the ability for users to dual-boot into Windows 10.

According to XDA-Developers, the company is actively courting Microsoft hardware certification for its flagship Chromebook, the Pixelbook. The “alt OS mode,” codenamed “Campfire,” is said to be coming to the Pixelbook in the not-too-distant future, with more Chromebook support down the line.

Which devices would actually be able to support Microsoft’s once ubiquitous operating system is dependent on, among other things, system specs. Microsoft’s worked to make Windows compatible with low-end systems, but even by those standards, some super cheap Chromebooks don’t boast the built-in storage required to run both Chrome OS and Windows 10. For all of its faults, maybe Windows 10 S would be a decent secondary platform.

Windows 10 on the Pixelbook is a compelling proposition. The high-end Chromebook is a lovely piece of hardware, but even with the addition of Android apps, there are still some software gaps. I took the device on a recent trip to China and was disappointed by some of the limitations I ran into on an otherwise fine device.

It’s suggested that all of this could come as soon as Google’s upcoming Pixel 3 event. Given a number of recent leaks, it does appear that the company’s got something big planned for the near term.

Samsung Galaxy Note 9: an AR Emoji review

Hi, I wrote a 3K word review of the new Samsung Galaxy Note 9. But you’re busy and it’s the weekend. I get it. For the sake of saving time, here’s a distilled version, narrated by the magic of the company’s deeply troubling AR Emoji version of me.

[Video segments: Design, Battery, Camera, Audio/Visual, Bixby and Price]

Anyway, just read the damn review. I promise there’s only one of these creepy things in it.

NASA’s Parker Solar Probe launches tonight to ‘touch the sun’

NASA’s ambitious mission to go closer to the Sun than ever before is set to launch in the small hours between Friday and Saturday — at 3:33 AM Eastern from Kennedy Space Center in Florida, to be precise. The Parker Solar Probe, after a handful of gravity assists and preliminary orbits, will enter a stable orbit around the enormous nuclear fireball that gives us all life and sample its radiation from less than 4 million miles away. Believe me, you don’t want to get much closer than that.

If you’re up late tonight (technically tomorrow morning), you can watch the launch live on NASA’s stream.

This is the first mission named after a living researcher, in this case Eugene Parker, who in the ’50s made a number of proposals and theories about the way that stars give off energy. He’s the guy who gave us solar wind, and his research was hugely influential in the study of the sun and other stars — but it’s only now that some of his hypotheses can be tested directly. (Parker himself visited the craft during its construction, and will be at the launch. No doubt he is immensely proud and excited about this whole situation.)

“Directly” means going as close to the sun as technology allows — which leads us to the PSP’s first major innovation: its heat shield, or thermal protection system.

There’s one good thing to be said for the heat near the sun: it’s a dry heat. Because there’s no water vapor or gases in space to heat up, find some shade and you’ll be quite comfortable. So the probe is essentially carrying the most heavy-duty parasol ever created.

It’s a sort of carbon sandwich, with superheated carbon composite on the outside and a carbon foam core. All together it’s less than a foot thick, but it reduces the temperature the probe’s instruments are subjected to from 2,500 degrees Fahrenheit to 85 — actually cooler than it is in much of the U.S. right now.

Go on – it’s quite cool.

The car-sized Parker will orbit the sun and constantly rotate itself so the heat shield is facing inward and blocking the brunt of the solar radiation. The instruments mostly sit behind it in a big insulated bundle.

And such instruments! There are three major experiments or instrument sets on the probe.

WISPR (Wide-Field Imager for Parker Solar Probe) is a pair of wide-field telescopes that will watch and image the structure of the corona and solar wind. This is the kind of observation we’ve made before — but never from up close. We generally are seeing these phenomena from the neighborhood of the Earth, nearly 100 million miles away. You can imagine that cutting out 90 million miles of cosmic dust, interfering radiation and other nuisances will produce an amazingly clear picture.

SWEAP (Solar Wind Electrons Alphas and Protons investigation) looks out to the side of the craft to watch the flows of electrons as they are affected by solar wind and other factors. And on the front is the Solar Probe Cup (I suspect this is a reference to the Ray Bradbury story, “The Golden Apples of the Sun”), which is exposed to the full strength of the sun’s radiation; a tiny opening allows charged particles in, and by tracking how they pass through a series of charged windows, the instrument can sort them by type and energy.

FIELDS is another that gets the full heat of the sun. Its antennas are the ones sticking out from the sides — they need to in order to directly sample the electric field surrounding the craft. A set of “fluxgate magnetometers,” clearly a made-up name, measures the magnetic field at an incredibly high rate: two million samples per second.

They’re all powered by solar panels, which seems obvious, but actually it’s a difficult proposition to keep the panels from overloading that close to the sun. They hide behind the shield and just peek out at an oblique angle, so only a fraction of the radiation hits them.

Even then, they’ll get so hot that the team needed to implement the first-ever active water cooling system on a spacecraft. Water is pumped through the cells and back behind the shield, where it is cooled by, well, space.

The probe’s mission profile is a complicated one. After escaping the clutches of the Earth, it will swing by Venus, not to get a gravity boost, but “almost like doing a little handbrake turn,” as one official described it. It slows it down and sends it closer to the sun — and it’ll do that seven more times, each time bringing it closer and closer to the sun’s surface, ultimately arriving in a stable orbit 3.83 million miles above the surface — that’s 95 percent of the way from the Earth to the sun.
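
A quick sanity check on that last figure, assuming the usual average Earth-sun distance of about 93 million miles (that number is mine, not the mission's):

```python
# Sanity-checking the orbit figures above; the 93-million-mile
# average Earth-sun distance is an assumption, not from the article.
earth_sun_miles = 93_000_000
final_orbit_miles = 3_830_000
fraction_traveled = 1 - final_orbit_miles / earth_sun_miles
print(f"{fraction_traveled:.1%}")  # ~95.9%, consistent with "95 percent"
```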

On the way it will hit a top speed of 430,000 miles per hour, which will make it the fastest spacecraft ever launched.

Parker will make 24 total passes through the corona, and during these times communication with Earth may be interrupted or impractical. If a solar cell is overheating, do you want to wait 20 minutes for a decision from NASA on whether to pull it back? No. This close to the sun even a slight miscalculation results in the reduction of the probe to a cinder, so the team has imbued it with more than the usual autonomy.

It’s covered in sensors in addition to its instruments, and an onboard AI will be empowered to make decisions to rectify anomalies. That sounds worryingly like a HAL 9000 situation, but there are no humans on board to kill, so it’s probably okay.

The mission is scheduled to last seven years, after which time the fuel used to correct the craft’s orbit and orientation is expected to run out. At that point it will continue as long as it can before drift causes it to break apart and, one rather hopes, become part of the sun’s corona itself.

The Parker Solar Probe is scheduled for launch early Saturday morning, and we’ll update this post when it takes off successfully or, as is possible, is delayed until a later date in the launch window.