
Curated Crowdsourcing in UFO Investigations


At 1:52pm on November 11, 2014, a Chilean Navy helicopter was flying along the coast about eighty miles southwest of Santiago airport. The crew was testing their new infrared camera, a Wescam MX-15 FLIR. It was a nice late-spring day with clear blue skies and low clouds over the nearby mountains.

The crew of the chopper spotted something white flying far off to the north. They could not identify it, so they looked at it with their new camera and started to try to follow it while recording video. The object looked like two linked black orbs in the infrared footage but like an indistinct white shape in the regular footage. At one point the object appeared to emit some strange substance, which, like the object itself, showed up as hot in the infrared footage. They continued to follow it, but it was moving too fast and they eventually lost sight of it and had to return to base.

Since this flying object was unidentified, the video footage ended up at the Committee for Study of Anomalous Aerial Phenomena (CEFAA), the official UFO investigating body of the Chilean DGAC (the DGAC is the equivalent of the FAA in the United States, or the CAA in the United Kingdom). It’s an impressive sounding body chaired until last December by a retired Air Force general, Ricardo Bermúdez, with two permanent investigators. They also list as associates the following categories of scientists and experts: “astronomers, geographers, nuclear chemist, physicists, psychologists, aerospace medicine, air traffic controllers, meteorologist, aeronautics researchers, pilot inspectors, aerospace engineers, and imagery analysts.”

It’s indeed an impressive list, and if the object had a prosaic explanation, then you would certainly expect such a group of experts to be able to identify it—or at least provide a plausible explanation.

Thermal camera image of the mysterious object leaving a trail.

But on and off for about two years, CEFAA looked into this case without finding such an explanation. They studied the crew accounts, the video, and other data they thought relevant. They enlisted the help of French UFO analysts who concluded that it was probably a plane coming into land at Santiago airport dumping waste water before landing. The Chileans rejected this explanation because no planes were landing at that time.

They asked meteorologists if it was a weather balloon and were told no. An astrophysicist checked to see if it could be space junk and told them it could not be. A Navy admiral told them there were no Navy exercises or secret aircraft flying in the area. The DGAC officials confirmed it was not a drone. Air Force photo analysis determined that the object was not a bird. Numerous other experts ruled out numerous other explanations.

So CEFAA could not figure out what it was, and after two years they made an announcement declaring it a genuine unexplained phenomenon. They released the video via writer Leslie Kean of the Huffington Post, who then wrote an article about the case, published on January 5, 2017.

The article and video went viral, quickly racking up over a million views. It caused great excitement—at last there was a “real” UFO video, certified unknown by the military, countless experts, and years of study.

Five days later, the case was solved. On January 6, Scott Brando from the UFO of Interest blog tweeted me a link and asked “Could this be a plane and its contrail?” I looked at the video and it instantly reminded me of a type of contrail I often see from my home. I live about 100 miles east of San Francisco, and the departing traffic often climbs through 25,000 feet as it heads over the Sierra Mountains. At that altitude, you often see short segments of aerodynamic contrails (which occur at lower altitudes than the normal exhaust contrails). So I wrote my first post on the Metabunk Skydentify forum on the topic: “My initial interpretation of this is that it is a plane, flying away from the camera, considerably higher than the helicopter (somewhere around 15,000 to 25,000 feet), that briefly creates an aerodynamic contrail. The two dots are flares from the heat of the engines.”

Skydentify is a sub-forum on Metabunk.com where people who enjoy identifying unknown aerial objects collaborate to do just that. We specialize in aircraft and contrails but cover all kinds of unidentified phenomena, so this was just the type of thing the Skydentify crowd likes. The next day, January 7, @Trailblazer posted that he had found the ADS-B coverage for that area on that day.

ADS-B is a relatively new system where aircraft use GPS to locate themselves and then broadcast their position, altitude, heading, etc., over radio to local ADS-B receivers. These receivers then share the information over the Internet, where it is collated and made freely available to the public by sites such as planefinder.net and flightradar24.com. The data is archived, and you can look at what planes were where at any time for several years back.
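The search through archived ADS-B data can be pictured as a simple filter: given a window of time and a bounding box around the sighting, collect every aircraft that reported a position inside both. This is a minimal sketch of that idea; the `AdsbReport` record and its fields are illustrative assumptions, not the actual schema used by planefinder.net or flightradar24.com.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AdsbReport:
    """One archived ADS-B position report (fields are illustrative)."""
    callsign: str
    timestamp: datetime
    lat: float          # degrees
    lon: float          # degrees
    altitude_ft: float

def candidate_flights(reports, t_start, t_end, lat_range, lon_range):
    """Return callsigns that reported a position inside the time window
    and the lat/lon bounding box around the sighting."""
    hits = set()
    for r in reports:
        if (t_start <= r.timestamp <= t_end
                and lat_range[0] <= r.lat <= lat_range[1]
                and lon_range[0] <= r.lon <= lon_range[1]):
            hits.add(r.callsign)
    return sorted(hits)
```

Applied to an area off the Chilean coast around 1:52pm on November 11, 2014, a filter like this is what narrows the archive down to a handful of candidate flights (the coordinates and distances here would be assumptions until checked against the real data).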

It quickly became apparent that there were only two possible planes. Due to the age of the data, we initially only had an overhead view of the air traffic, and I thought it was a twin-engine LAN Airlines plane (LA330). However, on January 8, I converted the data to a 3D format viewable in Google Earth, and it seemed that a four-engine Iberia jet (IB6830) was a better fit for most of the movement of the object, although LA330 seemed a bit better for the second contrail in the video.
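The “3D format viewable in Google Earth” is KML, which can represent a flight track as a `LineString` with absolute altitudes. A minimal sketch of generating such a file from a list of track points might look like this (the coordinate values in any real use would come from the ADS-B archive; the ones in the usage example are placeholders):

```python
def track_to_kml(name, points):
    """Build a minimal KML document for a 3D flight track.

    points: list of (lon, lat, altitude_m) tuples. KML expects
    coordinates in lon,lat,alt order, with altitude in metres;
    altitudeMode "absolute" keeps the track at its real height
    instead of clamping it to the ground.
    """
    coords = "\n".join(f"{lon},{lat},{alt}" for lon, lat, alt in points)
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>{name}</name>
    <LineString>
      <altitudeMode>absolute</altitudeMode>
      <coordinates>
{coords}
      </coordinates>
    </LineString>
  </Placemark>
</kml>"""

# Hypothetical example: two points of a climbing track near Santiago.
kml = track_to_kml("IB6830", [(-71.6, -33.6, 6000), (-71.5, -33.5, 6500)])
```

Viewing the track this way is what reveals whether a plane was climbing or descending, something a flat overhead view hides.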

The discussion continued for a few days, with various people weighing in. An airline pilot who flew out of Santiago commented that the planes would not have responded to “hailing” as they would still be on the ATC frequency and not the general traffic frequency. An expert on the camera used to capture the images explained the different fields of view and how the heading indicator was not calibrated. Other people joined in and asked questions, and we worked to answer them. We figured out that cold contrails appear warm because of contrast with the empty sky behind them, which appears extremely cold to the infrared camera. I did some experiments to verify this.

Object size matches time-stamped positions.
Experiment demonstrating how ice appears “hot” against the sky.

By January 11, after some detailed frame-by-frame analysis of the movement of the object, we were able to conclude with a very high degree of confidence that the object was flight IB6830, departing from Santiago airport and leaving two segments of aerodynamic contrails as it climbed. So the event was solved in five days.

It was challenging to communicate this to the UFO community; they had been told that countless experts in every field had verified that they could not figure out what this was after two years of study. So how could a few people on the Internet possibly figure it out in five days?

The answer, I feel, lies with a fundamental problem with panels of experts, namely that you can’t be an expert in the unknown. The list of scientists and experts from the CEFAA website is indeed impressive. There are all kinds of different disciplines there, and they would certainly be able to identify the majority of things we can see in the sky. But in this one case, the needed expertise was simply too specialized to be included in a general panel.

What was needed was someone who understood how persistent aerodynamic contrails formed, where they normally formed relative to the airport, what they looked like when viewed from eighty miles away, and how to view historical time-stamped ADS-B data overlaid on geolocated photographs in Google Earth.

In other words, they needed me on the panel—not that I’m a real expert in aviation. I just happen to have some very specific knowledge and experience in solving this specific type of case. The issue is not really that they should have had an expert like me on the panel. The point I’m making is that it’s impossible to have all the experts you can potentially need on a panel. Any panel is going to be limited in the amount of domain-specific knowledge it has, so eventually a UFO will slip through the gaps.

CEFAA nearly caught this one when they asked the French team IPACO to investigate. The IPACO report is technically quite good. However, they seemed unaware of the existence of ADS-B data and so were unable to find the GPS tracks that would have proven which plane it was. This was compounded by other errors, such as the number of engines, which they thought must have been two but was actually four. They also thought the plane was descending when it was in fact climbing. While the majority of IPACO’s reasoning about it being a plane was sound, their errors led to CEFAA throwing out the entire report.

How do you close the gaps in a panel of experts? Clearly you can’t just add more and more people to the panel. No, the way to close the gaps is to do what CEFAA eventually did (without intending to). You ask the Internet.

Asking the Internet (also known as “soliciting public comment,” “asking the public for help,” or “consulting the hive mind”) is a way of casting as wide a net as possible. While you might have a few dozen experts on your panel, they only cover a few dozen broad fields of study and a few narrow ones. By asking the Internet you instantly add several million narrow experts. Of course they are not all going to drop what they are doing and look at your video. You also have the concern you may be inundated with hundreds of suggestions that are unlikely to be helpful (“It’s a Chinook helicopter,” “It’s fake,” “It’s a trans-dimensional being,” and so on).

But the beauty of the Internet is that it helps in both directions. Given enough momentum, it funnels the question toward those who can answer it best. I didn’t look into this until someone else read about it. They thought it might be a contrail and realized I would be a good person to look into that. So the question found its way to me.

Similarly, if a good answer arises from the hive mind of the Internet, then a sufficient number of people will recognize it as the right answer and it will eventually get funneled back to those asking the question. Asking the Internet—crowdsourcing a question—does not mean you have to read every single discussion on the topic. You can just put the question out there, and if the question is interesting enough, and a good answer arises, then it will organically find its way back to you.

The process does not have to be entirely unstructured. With the Skydentify forum on Metabunk, we have something of a hybrid. It’s a public forum so anyone can comment, but there’s a core group involved in most of the discussion, then there’s me keeping everything organized. There was also some crossover with some Facebook groups I’m a member of. I think of this investigation as curated crowdsourcing. I certainly would not have arrived at the conclusive and detailed answer alone, but I think having someone to curate the additional input was a vital part of the process.

Perhaps the key point here is that CEFAA never asked the Internet at any time in the two years of their investigation. They had this lovely piece of evidence in the form of a video with time and GPS coordinates, and they could have simply posted it on the Internet when they got it and someone would have figured it out for them in a matter of days. Instead, they wasted months asking individual experts pointless and misguided questions. But besides the simple lack of diversity of expertise, why did CEFAA get it wrong? Why did they so conclusively eliminate a plane as a possibility?

I think the answer is a fundamental problem with the reasoning they used in the investigation. They set out to consider various possible explanations (bird, plane, weather balloon, meteor, etc.) and then worked to either verify or eliminate each of those explanations. Once everything they could think of was eliminated, they then certified it as a “verified unknown.”

The problem here is the binary nature of eliminating possible explanations. The explanation starts out as possible when first suggested, then is moved over to a hard impossible after some fact is found that contradicts the explanation. For example, the plane explanation was rejected for the following reasons: There were no planes landing at Santiago; there were no planes on the radar; planes can’t dump hot liquid; and the object did not respond to radio contact.

Now from a deductive-reasoning perspective, these objections sound reasonable, resting on seemingly unassailable propositional logic:

  1. If it was a plane, it would show up on radar.
  2. It did not show up on radar.
  3. Therefore, it is not a plane.

It all seems very reasonable. The problem is that this reasoning rests on two rather vague premises. In this case, “it did not show up on radar” actually meant “it did not show up on the helicopter’s short-range radar or on the ATC area radar where we thought it was.” And “if it was a plane, it would show up on radar” should really be “if it was a plane where we thought it was, it would show up on radar.”

So the flaw here is an oversimplification of the assumptions, and then treating them as being axiomatic, which then allows you to simply eliminate an explanation. A general term for this is an argument from false premises, a type of logical fallacy where incorrect assumptions can lead to incorrect conclusions. Once you replace the overly simple premises with the more complex accurate premises, you can see that the conclusion does not follow.
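The difference between the vague premise and the precise one can be made concrete with a toy model. Here the radar range and the distances are hypothetical round numbers chosen only to illustrate the logic: at the distance the crew assumed, a plane would have to show on radar, so the absence of a return eliminates it; at the plane’s actual, much greater distance, the absence of a return eliminates nothing.

```python
RADAR_RANGE_MILES = 50  # hypothetical effective coverage of the area radar

def should_show_on_radar(distance_miles):
    """Vague premise made explicit: a plane shows on radar
    only if it is within the radar's effective range."""
    return distance_miles <= RADAR_RANGE_MILES

assumed_distance = 35   # hypothetical: where the crew believed the object was
actual_distance = 80    # hypothetical: where the plane really was

# Under the vague premise (plane at the assumed position), "no radar
# return" contradicts the plane explanation, so it gets eliminated.
eliminated_under_vague_premise = should_show_on_radar(assumed_distance)

# Under the precise premise (plane at its real distance), the radar
# would not have seen it anyway, so the contradiction evaporates.
still_consistent_with_plane = not should_show_on_radar(actual_distance)
```

Both flags come out `True`: the same observation eliminates the plane under one premise and is perfectly consistent with it under the other.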

Arguably, we also have the fallacy of argument from ignorance, the fallacy where we reach a conclusion because we can’t figure out any alternative to that conclusion. Here the conclusion was that it was a “verified unknown” because they could not figure out what it was. This is then compounded with the argument from authority fallacy, where we are asked to believe the conclusion because the pilots were experienced and the investigators were retired military. But expertise does not prevent you from reaching the wrong conclusion from false premises.

The way I avoid arguing from incorrect assumptions is never to eliminate any explanation. Instead, keep a list of all the explanations you’ve come up with and rank them by the plausibility of what would have to be true for each explanation to hold. For example, for the plane explanation to be true, the helicopter pilots would have to have underestimated the distance to the object.

Such a list is a dynamic tool in a curated crowdsourced investigation, allowing for progressive refinement of all the possible explanations without them being prematurely eliminated. Often this will prompt new avenues of investigation. For example, if the requirement was that there be planes in the area, and there were no planes landing, then we might investigate if there were any planes taking off, and see how that would work out.
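As a data structure, this ranked list is very simple: each explanation carries the assumption it requires and a rough plausibility score that gets revised as evidence arrives, and the list is only ever re-sorted, never shortened. A minimal sketch (the entries and scores below are illustrative, not CEFAA’s actual candidates):

```python
from dataclasses import dataclass

@dataclass
class Explanation:
    name: str
    requires: str        # what would have to be true for this to hold
    plausibility: float  # rough 0..1 score, revised as evidence comes in

def rank(explanations):
    """Best explanation first; nothing is ever deleted from the list."""
    return sorted(explanations, key=lambda e: e.plausibility, reverse=True)

candidates = [
    Explanation("bird",
                "a bird resolvable on FLIR at this apparent distance", 0.05),
    Explanation("plane climbing out of Santiago",
                "the crew underestimated the distance to the object", 0.8),
    Explanation("weather balloon",
                "an untracked balloon matching the object's speed", 0.1),
]
```

Here `rank(candidates)` puts the plane first, but the bird and balloon entries stay on the list, ready to be re-scored if new evidence turns up.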

In many cases, especially those based on more limited photos or video, you are not going to be able to reach a definitive conclusion. However, a ranked list of possible explanations is still vastly preferable to a false conclusion that the phenomenon is “unexplained.” There is a vast difference between “no definitive explanation” and “no explanation.”

This type of rapid crowdsourced explanation of an “unsolvable” case is not unusual. There are two very different cases that have striking similarities to this one.

First is the case of the Roswell Slides. In 2014, UFO enthusiast Thomas Carey announced he had some Kodachrome slides showing the body of an alien. He claimed to be investigating this with groups of experts, and only poor-quality versions of the images were released. When the main image was released a year later, it took only a matter of hours for others to apply a filter to the image to reveal some text that explained the “alien” as a museum exhibit of a mummified two-year-old human child.

Second is the 2010 case of the Los Angeles Mystery Missile. This occurred over a much shorter timeframe but had much more impressive experts. A mysterious trail was spotted by a news helicopter; they could not figure out what it was (although they thought it was about thirty miles out over the water), so they asked some “experts,” specifically Tom McInerney (a highly decorated retired Air Force Lieutenant General) and Robert Ellsworth (former Deputy Secretary of Defense). They both said it was definitely not a plane and it looked like a missile. So the media went with that explanation. For a few days it was a huge news story—that some foreign power had fired a missile just a few miles from Los Angeles.

Then after a couple of days, several people (including me) identified it as the contrail of a plane that was over 100 miles away (as with the Chilean UFO case, we just used free online flight tracking). Again, there was some difficulty in getting people to believe this due to the high ranking “experts” having identified it as a missile. But after some extensive explanation with illustrations of the radar data matching the images, they eventually accepted it.

The lessons learned here are that groups of experts are no guarantee of success when investigating obscure phenomena, and the smaller the group, the less likely they are to have the exact obscure mix of knowledge that is needed. Experts should not be put on unassailable pedestals, especially with UFOs, since by definition it’s impossible to be an expert on something if you don’t know what it is.

Crowdsourcing has been shown time and again to be the quickest method for identifying an unknown (see, for example, the /r/Whatisthis subreddit). In the cases where the answer does not immediately spring from the Internet hive mind, curating a live list of possible explanations will provide valuable structure and direction to an investigation. Sometimes this ranked list might be what you end up with, but often, with a big enough crowd, the actual explanation can be found.


