With no moving parts, this plane flies on the ionic wind

Since planes were invented, they’ve flown using moving parts to push air around. Sure, there are gliders and dirigibles, which float more than fly, but powered flight is all about propellers (that’s why they call them that). Today that changes, with the first-ever “solid state” aircraft, flying with no moving parts at all by generating “ionic wind.”

If it sounds like science fiction… well, that’s about right. MIT’s Stephen Barrett explains that he took his inspiration directly from Star Trek.

“In the long-term future, planes shouldn’t have propellers and turbines,” Barrett said in an MIT news release. “They should be more like the shuttles in ‘Star Trek,’ that have just a blue glow and silently glide.”

“When I got an appointment at university,” he explained, “I thought, well, now I’ve got the opportunity to explore this, and started looking for physics that enabled that to happen.”

He didn’t discover the principle that makes his team’s craft fly; it has been known for nearly a century, but until now no one had managed to apply it successfully to flight.

The basic idea is simple: a powerful source of electrons passes its negative charge on to the air around it, “ionizing” it, at which point the charged air flows away from that source and toward (if you set it up right) a “collector” surface nearby. (Nature has a much more detailed explanation; the team’s paper was published in that journal today.)

Essentially what you’re doing is making negatively charged air flow in a direction you choose. The phenomenon was recognized in the ’20s, and in the ’60s researchers even attempted some thrust tests using it. But they were only able to convert about 1 percent of the input electricity into thrust. That’s inefficient, to say the least.

To tell the truth, Barrett et al.’s system doesn’t do a lot better, only getting 2.6 percent of the input energy back as thrust, but they have the benefit of computer-aided design and super-lightweight materials. The team determined that at a certain weight and wingspan, and with the thrust that can be generated at that scale, flight should theoretically be possible. They’ve spent years pursuing it.
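To put those efficiency numbers in perspective, here’s a back-of-envelope sketch in Python. The 500-watt input figure is hypothetical, chosen only for illustration; the efficiency figures are the ones cited above.

```python
# Back-of-envelope: how much input electricity ends up as useful thrust work.
# The 500 W input power is a made-up illustrative figure, not from the paper.

def useful_thrust_power(input_power_w: float, efficiency: float) -> float:
    """Watts of electrical input converted into thrust work."""
    return input_power_w * efficiency

input_power = 500.0  # hypothetical battery draw, in watts

for label, eff in [("1960s tests", 0.01), ("Barrett et al.", 0.026)]:
    print(f"{label}: {useful_thrust_power(input_power, eff):.1f} W as thrust")

# 1960s tests: 5.0 W as thrust
# Barrett et al.: 13.0 W as thrust
```

Not much either way, which is why ultra-light materials and careful computer-aided design were needed to squeeze a flying machine out of it.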

After many, many revisions (and as many crashes) they arrived at this 5-meter-wide, 2.5-kilogram, multi-decker craft, and after a few false starts it flew… for about 10 seconds. They were limited by the length of the room they tested in, and figure it could go farther, but the very fact that it was able to sustain flight significantly beyond the bounds of gliding is proof enough of the concept.

“This is the first-ever sustained flight of a plane with no moving parts in the propulsion system,” Barrett said. “This has potentially opened new and unexplored possibilities for aircraft which are quieter, mechanically simpler, and do not emit combustion emissions.”

No one, least of all the crew, thinks this is going to replace propellers or jet engines any time soon. But there are lots of applications for a silent and mechanically simple form of propulsion — drones, for instance, could use it for small adjustments or to create soft landings.

There’s lots of work to do. But the goal was to invent a solid-state flying machine, and that’s what they did. The rest is just engineering.

They’re making a real HAL 9000, and it’s called CASE

Don’t panic! Life imitates art, to be sure, but hopefully the researchers in charge of the Cognitive Architecture for Space Exploration, or CASE, have taken the right lessons from 2001: A Space Odyssey, and their AI won’t kill us all and/or expose us to alien artifacts so we enter a state of cosmic nirvana. (I think that’s what happened.)

CASE is primarily the work of Pete Bonasso, who has been working in AI and robotics for decades, since well before the current vogue of virtual assistants and natural language processing. It’s easy to forget these days that research in this area goes back to the middle of the 20th century, with a boom in the ’80s and ’90s as computing and robotics began to proliferate.

The question is how to intelligently monitor and administer a complicated environment like that of a space station, crewed spaceship, or a colony on the surface of the Moon or Mars. It’s a simple question with an answer that has been evolving for decades; the International Space Station (which just turned 20) has complex systems governing it and has grown more complex over time, but it’s far from the HAL 9000 that we all think of, and which inspired Bonasso to begin with.

“When people ask me what I am working on, the easiest thing to say is, ‘I am building HAL 9000,’ ” he wrote in a piece published today in the journal Science Robotics. Currently that work is being done under the auspices of TRACLabs, a research outfit in Houston.

One of the many challenges of this project is marrying the various layers of awareness and activity together. It may be, for example, that a robot arm needs to move something outside the habitat. Meanwhile, someone may want to initiate a video call with another part of the colony. There’s no reason for one single system to encompass command-and-control methods for robotics and a VoIP stack, yet at some point these responsibilities should be known and understood by some overarching agent.

CASE, therefore, isn’t some kind of mega-intelligent know-it-all AI, but an architecture for organizing systems and agents that is itself an intelligent agent. As Bonasso describes in his piece, and as is documented more thoroughly elsewhere, CASE is composed of several “layers” that govern control, routine activities, and planning. A voice interaction system translates human-language queries or commands into tasks those layers can carry out. But it’s the “ontology” system that’s the most important.

Any AI expected to manage a spaceship or colony has to have an intuitive understanding of the people, objects, and processes that make it up. At a basic level, for instance, that might mean knowing that if there’s no one in a room, the lights can turn off to save power but it can’t be depressurized. Or if someone moves a rover from its bay to park it by a solar panel, the AI has to understand that it’s gone, how to describe where it is, and how to plan around its absence.
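To make the idea concrete, here’s a minimal sketch of that kind of common-sense rule in Python. It’s purely illustrative and not drawn from CASE’s actual implementation, which layers a full ontology and planner over this sort of logic.

```python
# Toy habitat rules, purely illustrative (not CASE's real ontology layer).
from dataclasses import dataclass, field

@dataclass
class Room:
    name: str
    occupants: set = field(default_factory=set)
    lights_on: bool = True
    pressurized: bool = True

def save_power(room: Room) -> None:
    """Lights may go off in an empty room."""
    if not room.occupants:
        room.lights_on = False

def request_depressurize(room: Room) -> bool:
    """Depressurizing is refused whenever anyone is inside."""
    if room.occupants:
        print(f"Refusing: {room.name} is occupied by {sorted(room.occupants)}")
        return False
    room.pressurized = False
    return True

bay = Room("vehicle bay", occupants={"Dave"})
save_power(bay)                        # occupied, so lights stay on
assert bay.lights_on
assert not request_depressurize(bay)   # and the air stays in
```

A real ontology system also has to track where things are and update plans when they move (like the rover in the paragraph above), which is far harder than a pair of if-statements.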

This type of common-sense logic is deceptively difficult and is one of the major problems being tackled in AI today. Humans have years to learn cause and effect, to gather and put together visual clues to create a map of the world, and so on; for robots and AI, that understanding has to be created from scratch (and they’re not good at improvising). But CASE is working on fitting the pieces together.

Screen showing another ontology system from TRACLabs, PRONTOE.

“For example,” Bonasso writes, “the user could say, ‘Send the rover to the vehicle bay,’ and CASE would respond, ‘There are two rovers. Rover1 is charging a battery. Shall I send Rover2?’ Alas, if you say, ‘Open the pod bay doors, CASE’ (assuming there are pod bay doors in the habitat), unlike HAL, it will respond, ‘Certainly, Dave,’ because we have no plans to program paranoia into the system.”

I’m not sure why he had to write “alas” — our love of cinema is exceeded by our will to live, surely.

That won’t be a problem for some time to come, of course — CASE is still very much a work in progress.

“We have demonstrated it to manage a simulated base for about 4 hours, but much needs to be done for it to run an actual base,” Bonasso writes. “We are working with what NASA calls analogs, places where humans get together and pretend they are living on a distant planet or the moon. We hope to slowly, piece by piece, work CASE into one or more analogs to determine its value for future space expeditions.”

I’ve asked Bonasso for some more details and will update this post if I hear back.

Whether a CASE- or HAL-like AI will ever be in charge of a base is almost no longer a question; in a way, it’s the only reasonable way to manage what will certainly be an immensely complex system of systems. But for obvious reasons it needs to be developed from scratch with an emphasis on safety, reliability… and sanity.

11 moments from the International Space Station’s first 20 years

It was November 20, 1998, when an unprecedented international coalition of astronomers, engineers and rocket scientists saw years of collaboration come to fruition with the launch of the International Space Station’s first component. Since then, the largest spacecraft ever built has hosted innumerable astronauts, experiments and other craft. Here are a few notable moments in the history of this inspiring and decades-spanning mission.

1984: Reagan proposes the ISS — without Russia

The space station was originally going to be a U.S. effort, but soon became a collaboration with Canada, Japan and Europe, excluding the then-USSR. American-Russian relations were strained then, as you may remember, and although many in the space industry itself would have preferred working together, the political climate did not permit it. Nevertheless, initial work began.

1993: Clinton adds Russia to the bill

The collapse of the Soviet Union and subsequent rejuvenation of international relations led President Bush to bring Russia into the program in a limited fashion, as a supplier and as a guest on a shuttle mission. The next year, however, President Clinton one-upped him with the announcement that Russia would be a full partner. This was both a practical and political decision: Russian involvement would save billions, but it also helped bring Russia on board with other issues, like ICBM de-proliferation efforts. At any rate, designs were finally beginning to be built.

1998: The first components, Zarya and Unity, launch to orbit

Endeavour approaches Zarya, at the time the only component in place.

Though persona non grata at first, Russia had the privilege of launching the first core component of the ISS on November 20, 1998, the anniversary we are celebrating today. The Zarya Functional Cargo Block is still up there, still being used, forming the gateway to the Russian side of the station.

One month later, Space Shuttle Endeavour took off from Launch Complex 39A (we’ve been there) carrying Unity Node 1. This too is up there now, attached since that day to Zarya.

2000: The first of many long-term occupants arrive

From left: Shepherd, Gidzenko and Krikalev, aboard the station.

Almost exactly two years after Zarya went up, the first astronauts took up residence on the ISS, the first of 230 people so far to call the orbiting structure home. Bill Shepherd was NASA’s first representative, flying with cosmonauts Yuri Gidzenko and Sergei Krikalev; they would stay for about 141 days.

2003: Columbia disaster delays expansion

The fatal breakup of Space Shuttle Columbia on reentry following its 28th mission was a tragedy that grounded the remaining shuttles for over two years. As these were the primary means of the U.S. adding to and maintaining the ISS, this responsibility passed to Roscosmos until shuttle launches resumed in 2005; regular shuttle missions wouldn’t resume until mid-2006.

2008: Kibo goes up

Numerous modules have been added to the ISS over the years, but Japan’s Kibo is the largest. It took multiple missions to deliver all the pieces, and was only made possible by earlier missions that had expanded the solar power capacity of the station. Kibo contains a ton of reconfigurable space accessible from the pressurized interior, and has been popular for both private and public experiments that must be conducted in space.

2010: Enter the Cupola

If Kibo is the largest component, the Cupola is likely the most famous. The giant 7-window bubble looks like something out of science fiction (specifically, the front end of the Millennium Falcon) and is the location for the station’s most striking photography, both inside and out.

2014: Beautiful timelapses

With the Cupola in place, capturing imagery of the Earth from this amazing view became easier, especially with the increasingly high-quality digital cameras brought aboard by talented astronaut-photographers like Alexander Gerst and Don Pettit. The many, many photos taken through this aperture have been turned into innumerable beautiful timelapses and desktop backgrounds, capturing incredible phenomena like auroras and lightning storms from a new and valuable perspective. It’s hard to pick just one, but Don Pettit’s “The World Outside My Window” above is a fabulous example, and Gerst’s 4K compilation is another.

2015: Gennady Padalka sets time in space record

During his fifth flight to space, Gennady Padalka set a world record for most time in space: When he returned to Earth he had logged a total of 878 days and change. That’s well ahead of the competition, which is almost exclusively Russian — though NASA’s Peggy Whitson is right up there with 666 days over three missions.

2016: Chinese station calling ISS, please pick up

It’s hardly crowded in space, but it can get lonely up there. So it’s nice that those who have the honor to fly reach out to each other. In this case China’s taikonaut Jing Haipeng recorded a heartwarming video message from the Chinese Tiangong-2 space station greeting the incoming ISS crew and praising the community of global cooperation that makes all this possible.

2018: Soyuz accident threatens long-term occupation

A crewed mission to the ISS with astronaut Nick Hague and cosmonaut Alexey Ovchinin encountered a serious fault during launch; fortunately there were no injuries or fatalities, but the incident shook up the space community. The Soyuz rocket and capsule had more than proven themselves over the years, but no risks could be taken with human life, and future missions were delayed. It was possible that, for the first time since occupation began in 2000, the ISS would be left empty as its crew departed with no replacements on the way.

Fortunately the investigation has concluded and a new mission is planned for early December, which will prevent such a historic absence.

2019? First commercial crew mission and beyond

Russia has borne sole responsibility for all crewed launches for years; the U.S. has been planning to separate itself from this dependence by fostering a new generation of crew-capable capsules that can meet and exceed the safety and reliability of the Soyuz system. SpaceX and Boeing both plan 2019 flights for their respective Crew Dragon and Starliner capsules — though slipping dates and new regulatory attention may delay those further.

The ISS has a bright future even after a remarkable 20 years of continuous operation. It’s funded more or less through 2025, but there’s talk of new space stations from Russia and China both, while the U.S. eyes lunar orbit for its next big endeavor. It’s hard to imagine space now without an ISS full of people in it, however, and falling launch costs may mean that its life can be extended even further, at a lower price. Here’s hoping the ISS has another two decades in front of it.

First ever drone-delivered kidney is no worse for wear

Drone delivery really only seems practical for two things: take-out and organ transplants. Both are relatively light and also extremely time sensitive. Well, experiments in flying a kidney around Baltimore in a refrigerated box have yielded positive results — which also seems promising for getting your pad thai to you in good kit.

The test flights were conducted by researchers at the University of Maryland there, led by surgeon Joseph Scalea. He has been frustrated in the past with the inflexibility of air delivery systems, and felt that drones represent an obvious solution to the last-mile problem.

Scalea and his colleagues modified a DJI M600 drone to carry a refrigerated box payload, and also designed a wireless biosensor for monitoring the organ while in flight.

After months of waiting, their study was assigned a kidney that was healthy enough for testing but not good enough for transplant. Once it landed in Baltimore, the team loaded it into the container and flew it on 14 separate missions of various distances and profiles. The longest of these was three miles, a realistic distance between hospitals in the area, and the top speed achieved was 67.6 km/h, or about 42 mph.

Biopsies of the kidney were taken before and after the flights, and also after a reference flight on a small aircraft, which is another common way to transport organs medium distances.

Image credit: Joseph Scalea

The results are good: despite the potential threats of wind chill and heat from the drone’s motors (mitigated by choosing a design with a distal motor-rotor setup), the temperature of the box remained at 2.5 degrees Celsius, just above freezing. And no damage appeared to have been done by the drone’s vibrations or maneuvers.

Restrictions on drones and on how organs can be transported make it unlikely that this type of delivery will be taking place any time soon, but it’s studies like this that make it possible to challenge those restrictions. Once the risk has been quantified, then kidneys, livers, blood, and other tissues or important medical supplies may be transported this way — and in many cases, every minute counts.

One can also imagine the usefulness of this type of thing in disaster situations, when not just ordinary aircraft but also land vehicles may have trouble getting around a city. Drones should be able to carry much-needed supplies — but before they do, they should definitely be studied to make sure they aren’t going to curdle the blood or anything.

The specifics of the study are detailed in a paper published in the IEEE Journal of Translational Engineering in Health and Medicine.

Half-Life turns 20, and we all feel very old

The only thing that’s crazier than the fact that Half-Life was released exactly 20 years ago is that I wrote up its 10th anniversary on this very website… well, 10 years ago. We’ve both aged well, I like to think. But Half-Life has already left a legacy.

Half-Life was Valve’s first game, when they were a young game studio and not the giant gaming conglomerate we know them as today. The game was also a big risk — its narrative-heavy gameplay, including the now famous arrival-at-work intro sequence, was a departure from the generally simple shooters of the late ’90s.

At a time when most games were still level-based, Half-Life set forth a continuous (though still largely episodic) journey punctuated with setpiece encounters and more than a few terrifying moments. This story-centric, wide corridor approach would be immensely influential in game design, as would Half-Life’s scarily smart (for the time) enemy AI, particularly the soldiers sent to shut down the Black Mesa facility and everyone in it.

The tantalizing tastes of a larger story in which you were only one part — orchestrated by the still-mysterious G-Man — kept players on the hook through its expansions and eventually its masterful and sadly unfinished sequel.

The multiplayer, too, was a joy. I remember in particular long matches of robots versus scientists in Gasworks, and brutal close-quarters combat trying to escape the air raid in Crossfire. Then of course Team Fortress Classic and all that came after.

But it wasn’t just Half-Life itself that was influential. Valve’s success with this experiment drove it to make further forays into gaming infrastructure, leading to the creation of Steam — now, of course, the world’s leading PC gaming platform. Although there are arguments to be made now that Steam is stuck in the past in many ways, it’s hard to overestimate its effect on the gaming industry over the years.

I replayed the game a couple of years ago and it mostly holds up. The initial chapters are still compelling and creepy, and the action is still fun and frantic. The pacing isn’t so hot, the graphics are certainly dated these days, and of course Xen is still a pain. But overall it’s easy to put yourself back in your ’90s shoes and remember how amazing this was back then.

If you’re thinking of replaying it, however, you might do yourself a favor and instead play Black Mesa, a full-on remake of the game with more modern graphics and a lot of quality of life changes. It’s still largely the same game, just not quite as 1998.

Cryptocurrency chill causes mining speculator Nvidia’s stock to plunge

The cryptocurrency market is an exciting one, but it’s also unpredictable — and when things go south, they take related businesses with them. Nvidia, a hardware giant that has been riding the cryptocurrency wave, saw its stock price take a double-digit hit as it reported vanishing demand for GPUs specializing in crypto-mining.

It’s been a wild year in the GPU market, and there were points when ordinary gamers, who have relied on Nvidia for years for the powerful cards used to play the latest games, found inventory of the company’s latest generation of hardware scarce.

The cards had been, and continued to be for some time, bought up by cryptocurrency mining operations all striving to get a leg up on one another. Consumer-grade GPUs are excellent candidates for putting together low-cost, high-performance clusters that excel at solving the type of problems posed by the likes of Bitcoin mining. The cards were essentially paying for themselves, given how lucrative those markets were.
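For a sense of the kind of problem those cards were solving, here’s a toy proof-of-work loop in Python. It’s CPU-only and purely illustrative; real Bitcoin mining double-hashes block headers and runs massively parallel versions of this search on GPUs or ASICs.

```python
# Toy proof-of-work: find a nonce whose SHA-256 digest has many leading zero bits.
# Illustrative only; real mining parallelizes this search across GPU/ASIC cores.
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    """Return the first nonce whose digest falls below the target."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Each added difficulty bit doubles the expected work; 20 bits is roughly
# a million hashes on average: trivial for a GPU, slow for this loop.
print(mine(b"example block", difficulty_bits=20))
```

The work scales exponentially with difficulty, and difficulty rises with competition, which is exactly the arms race that drove miners to buy GPUs by the pallet.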

But those markets, which had been booming for much of the year, have cooled (not to say crashed), and consequently demand for GPUs has cooled as well, as Nvidia’s earnings statements show.

If Nvidia had seen the cryptocurrency boom for what it was at the time — an important but misleading flare in value — it likely would not have produced the estimated $57 million in excess inventory aimed at the miner market. Mid-range gaming GPU sales declined as well, though this seems to have been part of a larger trend.

It will take a couple quarters to get through all that inventory, during which time of course it will have to be steeply discounted, since miners and gamers understand implicitly that improved versions are just around the corner and are unlikely to pay full price for hardware approaching even a minor degree of obsolescence. The misstep caused Nvidia’s price to drop more than 19 percent Thursday, and it has not rallied today.

“This is surely a setback and I wish we had seen it earlier,” said CEO Jensen Huang on a press call following the announcement of the results.

Cryptocurrency markets may never return to the feverish state of competition they existed in for much of 2018. An explosion of “alt coins” and initial coin offerings baffled casual investors in the ecosystem, and scams were (and are) rife. This led to overall skepticism about these systems as a class, and even sophisticated and proven ones like Ethereum have suffered major devaluation.

There’s no doubt that blockchain and token economies will be a major part of the financial future (among other things), but the feeding frenzy of 2018 seemed unsustainable from the start. Already many cryptocurrency systems are moving away from the arms race of “proof of work” to the more equitable “proof of stake.” That change alone could drastically reduce computing requirements if widely adopted (although established systems like Bitcoin are too far along to change, outside ill-advised forks like Bitcoin Cash).

Don’t bother shedding a tear for Nvidia, though. The company is rolling high and the GPU market is strong. But it seems that it too, alongside millions of others, has suffered the consequences of speculating on cryptocurrency.

E3 slouches towards irrelevance again as Sony announces it’s skipping the show

I like E3. I really do. But it’s also monumentally dumb: game companies spending millions to show off essentially faked content to an increasingly jaded audience. And it’s increasingly out of step with how the gaming industry works. So it should come as no surprise that Sony will be skipping the show more or less altogether this year, joining Nintendo in taking a step back from spectacle.

Sony has been a part of E3 for more than 20 years, and this will be the first one it’s ever missed. I’ve gone to their events every time I’ve attended; I was there for their historic putdown of Microsoft after the latter announced some hugely unpopular restrictions on used games. I think you can actually see me near the front in the broadcast of that one. (You can! I’m at 1:29.)

And E3 has been a part of Sony’s yearly cadence as well. Like other companies, for years Sony hoarded information to debut at E3, TGS, and Gamescom, but E3 was generally where you saw new consoles and flagship titles debut. But as even E3’s organizers have admitted over and over again, that’s not necessarily a good thing.

Too often we have seen half-finished games on stage at E3 that end up cancelled before the year is out, or commitments made to dates the companies can’t possibly keep. Tying a complex, creative industry to a yearly schedule of major announcements is a great way to burn studios out, and that’s exactly what’s happening.

Variety first noticed Sony’s absence from ESA communications. In a statement issued to multiple outlets, Sony said:

As the industry evolves, Sony Interactive Entertainment continues to look for inventive opportunities to engage the community. PlayStation fans mean the world to us and we always want to innovate, think differently and experiment with new ways to delight gamers. As a result, we have decided not to participate in E3 in 2019. We are exploring new and familiar ways to engage our community in 2019 and can’t wait to share our plans with you.

They won’t be alone. Nintendo hasn’t had a proper E3 press conference in years. Instead, they host a live stream around the event and have a big booth where people mainly just play games. Their Nintendo Direct videos come out throughout the year, when the titles and developers are good and ready.

Microsoft is still there, and still puts on quite a show. I remember the original announcement of the Kinect, probably one of the weirdest and dumbest things I’ve ever taken part in. It was memorable, at least.

But Microsoft is also doing its own thing, announcing throughout the year and on its own terms. The Xbox One X was only hinted at during E3 and announced in full much later. I wouldn’t be surprised if Microsoft also announced it was taking it easy at E3 this year, though this might also be a good opportunity for it to double down. Given the lead times these huge shows run on, it may already be committed to one course or another.

Sony actually has its own PlayStation Experience event where it announces things and lets gamers and press play the latest, but even that was cancelled ahead of its expected December date. Is Sony just getting shy?

More likely they are leveraging their dominance in the console market to act as a “decider,” as they say. They have no shortage of amazing games coming out, including lots of hot-looking exclusives. What have they got to prove? Although Sony itself is not participating in E3, the developers it backs will almost certainly be there. What better way to school the competition than to not show up and still have everyone talking about you?

With the PS4 Pro out there and a solid line-up already confirmed, Sony is sitting pretty for 2019, and the company probably feels this is a safe time to experiment with “inventive opportunities to engage the community,” as the statement put it. E3 will still be big, and it will still be fun. But the trend is clear: it just won’t be necessary.

Cassette decks from Crosley take aim at tape-hoarding nostalgia-seekers

Crosley, makers of the “good enough” record players you see in Urban Outfitters and Target, have turned their retro novelty eye on the next obvious format: cassettes. These two new decks from the company have all the latest features from 1985, but also a handful of modern conveniences.

Let’s get one thing clear at the outset: these are certainly ridiculous. And yes, you can buy a boom box with a cassette deck right now, new, for $30 or so. But having browsed the stock I can tell you that most of them are pretty ugly. There are vintage decks too, but many haven’t aged well and may have unfixable issues like corrosion or motor problems.

And believe it or not, tapes are still around. People are manufacturing and recording on them because they’re fun and retro and analog. I’ve bought a few myself at shows in the last year.

So there is actually a market for a new, decent-looking, portable cassette player and radio.

The Crosley devices are pretty straightforward. There are two models; each has a big mono speaker, a single-direction deck (meaning you’ll have to flip the tape), an AM/FM radio, and a built-in mic. The $60 CT100 model (top) adds shortwave radio bands and the ability to play music from an SD card or USB drive, while the $70 CT200 has treble and bass dials and a VU meter for easier recording of cassette-based podcasts. Both have handles.

Of the two I’d definitely go with the CT100, since presumably you can use the SD/USB player to record mixtapes of stuff you’ve downloaded. Record a little intro with the mic or pretend you’re the DJ between songs, and boom, it’s like you’re me in 1994. Plus you never know when shortwave will come in handy.

It’s silly, but it’s a silly world we live in. Silly and horrible. Maybe bringing back cassettes will help. Keep an eye out for these players wherever fake Ray-Bans and plaid scarves are sold.

FCC approval of Europe’s Galileo satellite signals may give your phone’s GPS a boost

The FCC’s space-focused meeting today had actions taken on SpaceX satellites and orbital debris reduction, but the decision most likely to affect users has to do with Galileo. No, not the astronomer — the global positioning satellite constellation put in place by the E.U. over the last few years. It’s now legal for U.S. phones to use, and a simple software update could soon give your GPS signal a major bump.

Galileo is one of several modern counterparts to the Global Positioning System that’s been in use since the ’90s. But because GPS is U.S.-managed, and its accuracy was for a long time artificially limited for everyone but the U.S. military, it should come as no surprise that European, Russian and Chinese authorities would want their own solutions. Russia’s GLONASS is operational, and China is hard at work getting its BeiDou system online.

The E.U.’s answer to GPS was Galileo, and the 26 (out of 30 planned) satellites making up the constellation offer improved accuracy and other services, such as altitude positioning. Test satellites went up as early as 2005, but it wasn’t until 2016 that the system began actually offering location services.

A Galileo satellite launch earlier this year.

Devices already existed that would take advantage of Galileo signals — all the way back to the iPhone 6s, the Samsung Galaxy S7 and many others from that era forward. It just depends on the wireless chip inside the phone or navigation unit, and it’s pretty much standard now. (There’s a partial list of smartphones supporting Galileo here.)
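If you’re curious whether your own receiver is seeing Galileo satellites, one low-level check is the talker ID on the NMEA sentences many GNSS chips emit: Galileo satellites-in-view reports start with $GAGSV. Here’s a minimal Python sketch; the sample sentences are truncated and made up for illustration.

```python
# Count satellites-in-view (GSV) sentences by constellation in an NMEA log.
# Standard NMEA talker IDs: GP = GPS, GL = GLONASS, GA = Galileo, GB = BeiDou.
from collections import Counter

TALKERS = {"GP": "GPS", "GL": "GLONASS", "GA": "Galileo", "GB": "BeiDou"}

def constellations_seen(nmea_lines):
    counts = Counter()
    for line in nmea_lines:
        if line.startswith("$") and line[3:6] == "GSV":
            counts[TALKERS.get(line[1:3], "other")] += 1
    return counts

sample = [  # made-up, truncated example sentences
    "$GPGSV,3,1,09,...",
    "$GAGSV,2,1,07,...",
]
print(constellations_seen(sample))  # Counter({'GPS': 1, 'Galileo': 1})
```

Whether those satellites are actually used in your position fix is a separate question, and that’s the part the FCC decision (and a software update) unlocks.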

When a company sells a new phone, it’s much easier to just make a couple million of the same thing rather than make tiny changes like using a wireless chipset in U.S. models that doesn’t support Galileo. The trade-off in savings versus complexity of manufacturing and distribution just isn’t worthwhile.

The thing is, American phones couldn’t use Galileo, because the FCC has regulations against ground stations communicating with foreign satellites. And that’s exactly what using Galileo positioning amounts to, though of course it’s nothing sinister.

If you’re in the U.S., then, your phone likely has the capability to use Galileo but it has been disabled in software. The FCC decision today lets device makers change that, and the result could be much-improved location services. (One band not very compatible with existing U.S. navigation services has been held back, but two of the three are now available.)

Interestingly enough, however, your phone may already be using Galileo without your or the FCC’s knowledge. Because the capability is behind a software lock, it’s possible that a user could install an app or service bringing it into use. Perhaps you travel to Europe a lot and use a French app store and navigation app designed to work with Galileo and it unlocked the bands. There’d be nothing wrong with that.

Or perhaps you installed a custom ROM that included the ability to check the Galileo signal. That’s technically illegal, but the thing is there’s basically no way for anyone to tell! The way these systems work, all you’d be doing is receiving a signal illegally that your phone already supports and that’s already hitting its antennas every second — so who’s going to report you?

It’s unlikely that phone makers have secretly enabled the Galileo frequencies on U.S. models, but as Commissioner Jessica Rosenworcel pointed out in a statement accompanying the FCC action, that doesn’t mean it isn’t happening:

If you read the record in this proceeding and others like it, it becomes clear that many devices in the United States are already operating with foreign signals. But nowhere in our record is there a good picture of how many devices in this country are interacting with these foreign satellite systems, what it means for compliance with our rules, and what it means for the security of our systems. We should change that. Technology has gotten ahead of our approval policies and it’s time for a true-up.

She isn’t suggesting a crackdown — this is about regulation lagging behind consumer tech. Still, it is a little worrying that the FCC basically has no idea, and no way to find out, how many devices are illicitly tuning in to Galileo signals.

Expect an update to roll out to your phone sometime soon — Galileo signals will be of serious benefit to any location-based app, and to public services like 911, which are now officially allowed to use the more accurate service to determine location.

SpaceX gets FCC approval to add 7,518 more satellites to its Starlink constellation

SpaceX’s application to add thousands of satellites to its proposed Starlink communications constellation has been approved by the FCC, though it will be some time before the company actually puts those birds in the air.

SpaceX is just one of many companies the FCC gave the green light to today at its monthly meeting. Kepler, Telesat and LeoSat also got approval for various services, though with 140, 117 and 78 satellites proposed respectively, they aren’t nearly as ambitious in scale. Several others were approved as well with smaller proposals.

SpaceX officially applied to put these 7,518 satellites into orbit — alongside the already approved 4,409 — back in March of 2017. Last month the FCC indicated it planned to approve the request by circulating a draft order (PDF) to that effect, which it today made official.

These satellites would orbit at the extremely low (for satellites) altitude of around 340 kilometers — even lower than the 550-kilometer orbit it plans to put 1,584 satellites in from the other group.

Low orbits decay quickly, and satellites there may only last a couple of years before they burn up. But being closer to the Earth also means that latency and the power required for signals are considerably lower. It requires more satellites to cover a given area, but if managed properly it’ll produce a faster, more reliable connection, or augment the system in areas where demand is high. Since SpaceX has only launched two test satellites so far, though, this is more or less theoretical.
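The latency claim is easy to sanity-check with straight-line physics: the minimum one-way delay is just altitude divided by the speed of light. Real-world latency adds routing and processing on top, so treat these as floors, not predictions.

```python
# Minimum one-way propagation delay straight down from a satellite.
C = 299_792_458  # speed of light, m/s

def min_one_way_delay_ms(altitude_km: float) -> float:
    return altitude_km * 1_000 / C * 1_000

for name, alt_km in [("Starlink VLEO", 340), ("Starlink LEO", 550),
                     ("geostationary", 35_786)]:
    print(f"{name} ({alt_km} km): {min_one_way_delay_ms(alt_km):.1f} ms")

# Starlink VLEO (340 km): 1.1 ms
# Starlink LEO (550 km): 1.8 ms
# geostationary (35786 km): 119.4 ms
```

That two-orders-of-magnitude gap versus geostationary satellites is the whole argument for flying big constellations low, despite the shorter satellite lifetimes.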

The satellites would also use V-band radio rather than the more common Ka/Ku bands often employed by this general type of service, which, as SpaceX points out, will keep those popular bands unclogged as satellite numbers multiply.

Launches of the new system should begin some time next year if the new management at Starlink wants to keep their jobs. It would take quite a long time to get enough satellites into orbit that the service would work even in barebones fashion, but it isn’t bad going from idea to minimum viable product in a handful of years, when that MVP has to be hundreds of satellites actually in space.

You might be wondering whether this all will produce rather a lot of trash in orbit, since all these launches and the satellites themselves produce waste of various kinds. Well, SpaceX is one of the good ones here, as not only is it pursuing reusable first stages instead of having them float off and break up, but low orbit satellites like these are the least likely to clutter space. Rocket Lab, which just raised $140 million after sending up its own first commercial mission to space, is also very focused on this problem.

The FCC is, for some reason, one of the major authorities on orbital debris, and is currently looking at revising its rules.

“It’s been over a decade since we last reviewed our orbital debris rules, and in that time, the number of satellites in use has increased dramatically,” said FCC Chairman Ajit Pai in a statement accompanying the news. “So it’s high time for the Commission to take up this important topic once again.”

Commissioner Jessica Rosenworcel, one of the driving forces behind the effort, was lukewarm on the current effort.

The agency needs to “do more than just accelerate this problem by rubber stamping every next-generation satellite application that comes our way using yesterday’s orbital debris rules,” she wrote in a statement, and today’s rulemaking proposal is “only a timid start.”

“Moreover, I am concerned it does not set this agency up for success in the future. It misses the forest for the trees. It also muddles the path forward. This is not the leadership we need as we embark on a new era in space. We need clear guidance from this agency.”

The proposed rules are not close to final or complete, but should be public soon — we’ll take a good look at them when that happens and see how the FCC plans to fight the orbital debris problem before it turns into a crisis.