
Ask Hackaday: What Color Are Your PCBs?


A decade ago, buying a custom-printed circuit board meant paying a fortune and possibly even using a board house’s proprietary software to design the PCB. Now, we all have powerful, independent tools to design circuit boards, and there are a hundred factories in China that will take your Gerbers and send you ten copies of your board for pennies per square inch. We are living in a golden age of printed circuit boards, and they come in a rainbow of colors. This raises the question: which color soldermask is most popular, which is most desirable, and why? Seeed Studio, a Chinese PCB house, recently ran a poll on the most popular colors of soldermask. This was compared to their actual sales data. Which PCB color is the most popular? It depends on who you ask, and how you ask it.

But first, let’s examine the rainbow of PCB options. Seeed’s Fusion PCB service offers six different colors of soldermask for every PCB order: white, black, red, blue, green, or yellow. These are the colors you’ll find at most board houses, and remain constant with few exceptions; OSH Park only offers purple, but you can get purple from a few other manufacturers. There are rumors of orange soldermask. Matte black and matte green are generally the seventh and eighth colors available from any PCB manufacturer. We’ve seen pink PCBs in the wild, as well.

I just so happened to order PCB color swatches from Seeed last month for an unrelated project.

Seeed’s service, unlike most others, doesn’t include an upcharge for colors other than green, so in some sense it makes the ideal experiment. In their poll, Seeed asked customers what color soldermask was their favorite. Black soldermask topped the list, closely followed by blue. Green picked up third, red was a bit behind green, and white and yellow combined barely made a dent in the numbers. But when you look at actual Seeed Fusion PCB orders, a significant difference is revealed. By far, the most popular color of soldermask ordered on Seeed’s service is green, comprising nearly half of all orders. Black was the second-most popular color ordered, followed by blue, red, white, and yellow.

Yellow is a terrible soldermask color

The fact that there’s a difference between what people say they want and what they will buy should come as no surprise to anyone. The reason for this difference is worthy of discussion, though. The traditional color for soldermask is green because it performs better and because ‘slightly different shades of green’ lend themselves better to visual inspection than other colors.

Green is also the default color when ordering through Seeed’s Fusion service, and if someone just wants a working PCB, they probably don’t care too much about the color. This would easily explain its higher rank among all orders, but its lower rank among people who had color preferences.

However, the popularity of green soldermask says nothing about the relative popularity of black and blue versus yellow. Why are blue and black soldermask so popular when white and yellow are so unpopular? Traces are exquisitely visible on yellow, but Seeed’s yellow PCBs come with white silkscreen, which makes names and part numbers nearly unreadable. White soldermask, on the other hand, looks really good and provides great contrast for the black silkscreen; it’s just nearly impossible to follow a copper trace on it.

So, we’re opening up the comments. What color do you use for your printed circuit boards? Do you go for pure performance, or do artistic concerns weigh in? If you’ve ever done anything artistic with yellow soldermask, what was it? Does picking green for your soldermask mean you’re lazy? This is an Ask Hackaday, so put your thoughts below.


PCBs As Linear Motors


PCBs are exceptionally cheap now, and that means everyone gets to experiment with the careful application of copper traces on a fiberglass substrate. For his Hackaday Prize entry, [Carl] is putting coils on a PCB. What can you do with that? Build a motor, obviously. This isn’t any motor, though: it’s a linear motor. If you’ve ever wanted a maglev train on a PCB, this is the project for you.

This project is a slight extension of [Carl]’s other PCB motor project, the aptly named PCB Motor. For this project, [Carl] whipped up a small, circular PCB with a few …read more


Tricking A Vintage Clock Chip Into Working On 50-Hz Power


Thanks to microcontrollers, RTC modules, and a plethora of cheap and interesting display options, digital clock projects have become pretty easy. Choose to base a clock build around a chip sporting a date code from the late 70s, though, and your build is bound to be more than run-of-the-mill.

This is the boat that [Fran Blanche] finds herself in with one of her ongoing projects. The chip in question is a Mostek MK50250 digital alarm clock chip, and her first hurdle was finding a way to run the clock on 50 Hertz from North American 60-Hertz power. The reason for this is a lesson in the compromises engineers sometimes have to make during the design process, and how that sometimes leads to false assumptions. It seems that the Mostek designers assumed that a 24-hour display would only ever be needed in locales where the line frequency is 50 Hz. [Fran], however, wants military time at 60 Hz, so she came up with a circuit to fool the chip. It uses a 4017 decade counter to count the incoming 60-Hz pulses and, on every sixth count, turn on a transistor that pulls the 60-Hz line low for one pulse. The result is one dropped pulse out of every six, which gives the Mostek the 50-Hz signal it needs. Sure, the pulse chain is asymmetric, but the chip won’t care, and [Fran] gets the clock she wants. Pretty clever.
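
The arithmetic is easy to check with a quick simulation. This is a minimal C sketch of the pulse-swallowing idea (not [Fran]’s actual hardware): blank every sixth pulse of a 60-Hz train and count what’s left.

```c
#include <stdio.h>

/* Toy model of the pulse-swallowing trick: a counter watches the
 * 60-Hz line and blanks every sixth pulse, so the clock chip sees
 * 50 pulses per simulated second. */
int main(void)
{
    int count = 0;    /* stands in for the counter's state            */
    int passed = 0;   /* pulses that actually reach the clock chip    */

    for (int pulse = 0; pulse < 60; pulse++) {   /* one second of 60 Hz */
        count++;
        if (count == 6) {
            count = 0;   /* sixth pulse: transistor pulls the line low */
        } else {
            passed++;    /* pulse passes through to the MK50250        */
        }
    }

    printf("Pulses delivered per second: %d\n", passed);  /* prints 50 */
    return 0;
}
```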

[Fran] has been teasing this clock build for a while, and we’re keen to see what it looks like. We hope she’ll be using these outsized not-quite-a-light-pipe LED displays or something similar.

Cheap Front Panels with Dibond Aluminium


The production capability available to the individual hacker today is really quite incredible. Even a low-end laser engraver can etch your PCBs, and it doesn’t take a top of the line 3D printer to knock out a nice looking enclosure. With the wide availability of these (relatively) cheap machines, the home builder can churn out a very impressive one-off device on a fairly meager budget. Even low volume production isn’t entirely out of the question. But there’s still one element to a professional looking device that remains frustratingly difficult: a good looking front panel.

Now if your laser is strong enough to engrave (and ideally cut) aluminum sheets, then you’ve largely solved this problem. But for those of us who are plodding along with a cheap imported diode laser, getting text and images onto a piece of metal can be rather tricky. On Hackaday.io, [oaox] has demonstrated a cost effective way to create metal front panels for your devices using a print service that offers Dibond aluminum. Consisting of two thin layers of aluminum with a solid polyethylene core, this composite material was designed specifically for signage. Through various online services, you can have whatever you wish printed on a sheet of pre-cut Dibond without spending a lot of money.

As explained by [oaox], the first step is putting together the image you’ll send off to the printer using a software package like Inkscape. The key is to properly define the size of the Dibond plate in your software and work within those confines; otherwise, the layout might not look how you expected once the finished piece gets back to you. It’s also important to avoid lossy compression formats like JPEG when sending the file out for production, as it can turn text into a mushy mess.

When you get the sheet back, all you need to do is put your holes in it. Thanks to the plastic core, Dibond is fairly easy to cut and drill as long as you take your time. [oaox] used a step drill for the holes, and a small coping saw for the larger openings. The final result looks great, and required very little effort in the grand scheme of things.

But how much does it cost? Looking around online, we were quoted prices as low as $7 USD to do a full-color 4×4 inch Dibond panel, and one site offered a 12×12 panel for $20. For a small production run, you could fit several copies of the graphics onto one larger panel and cut them out with a bandsaw; that could drop the per-unit price to only a couple bucks.

We’ve seen some clever attempts at professional looking front panels, from inkjet printing on transparencies to taking the nuclear option and laser cutting thin plywood. This is one of those issues the community has been struggling with for years, but at least it looks like we’re finally getting some decent options.

FPV-Rover 2.0 Has 3D Printed Treads and Plenty of Zip


[Markus_p] has already finished one really successful 3D printed tracked robot build. Now he’s finished a second one using standard motors and incorporating what he learned from the first. The results are pretty impressive and you can see a video demo of the beast, below.

Most of the robot is PLA, although there are some parts that use PETG and flex plastic. There is an infrared-capable camera up front and another regular camera on the rear. All the electronics are pretty much off the shelf modules like an FPV transmitter and an electronic controller for the motors. There’s a servo to tilt the camera, as you can see in the second video.

The body fits together using nuts and magnets. The robot in the video takes a good beating and doesn’t seem to fall apart so it must be sufficient. What appealed to us was the size of the thing. It looks like it would be trivially easy to mount some processing power inside or on top of the rover and it could make a great motion base for a more sophisticated robot.

We’ve seen some similar projects, of course. This tracked robot uses mind control. And OpenWheel is a great place to get treads and other locomotion designs.

Looking for a tracked rover you can drive around on the desktop? [Markus_p] has one of those too!

Retrotechtacular: Car Navigation Like It’s 1971


Anyone old enough to have driven before the GPS era probably wonders, as we do, how anyone ever found anything. Navigation back then meant outdated paper maps, long detours because of missed turns, and the far too frequent stops at dingy gas stations for the humiliation of asking for directions. It took forever sometimes, and though we got where we were going, it always seemed like there had to be a better way.

Indeed there was, but instead of waiting for the future and a constellation of satellites to guide the way, some clever folks in the early 1970s had a go at dead reckoning systems for car navigation. The video below shows one, called Cassette Navigation, in action. It consisted of a controller mounted under the dash and a modified cassette player. Special tapes, with spoken turn-by-turn instructions recorded for a specific route, were used. Each step was separated from the next by a tone, the length of which encoded the distance the car would cover before the next step needed to be played. The controller was hooked to the speedometer cable, and when the distance traveled corresponded to the tone length, the next instruction was played. There’s a long list of problems with this method, not least of which is no choice in road tunes while using it, but given the limitations at the time, it was pretty ingenious.

Dead reckoning is better than nothing, but it’s a far cry from GPS navigation. If you’re still baffled by how that cloud of satellites points you to the nearest Waffle House at 3:00 AM, check out our GPS primer for the details.

Thanks for the tip, [Chaffel]

[via Jalopnik]

Using IMUs For Odometry


The future is autonomous robots. Whether that means electric cars with rebranded adaptive cruise control, or delivery robots that are actually just remote control cars, the robots of the future will need to decide how to move, where to move, and be capable of tracking their own movement. This is the problem of odometry, or how far a robot has traveled. There are many ways to solve this problem, but GPS isn’t really accurate enough and putting encoders on wheels doesn’t account for slipping. What’s really needed for robotic odometry is multiple sensors, and for that we have [Pablo] and [Alfonso]’s entry to the Hackaday Prize, the IMcorder.

The IMcorder is a simple device loaded up with an MPU9250 IMU module that has an integrated accelerometer, gyro, and compass. This is attached to an Arduino Pro Mini and a Bluetooth module that allows the IMcorder to communicate with a robot’s main computer to provide information about a robot’s orientation and acceleration. All of this is put together on a fantastically tiny PCB with a lithium battery, allowing this project to be integrated into any robotics project without much, if any, modification.
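
For a feel of what a robot’s main computer might do with that data stream, here is a toy one-dimensional dead-reckoning loop in C. It is not the IMcorder firmware, and the accelerometer samples and timestep are invented for the example; real systems fuse this with the gyro, compass, and wheel encoders because naive double integration drifts quickly.

```c
#include <stdio.h>

/* Naive 1-D dead reckoning: integrate acceleration twice to estimate
 * distance traveled. The samples below are made up for illustration. */
int main(void)
{
    /* Pretend accelerometer samples in m/s^2, sampled at 10 Hz (dt = 0.1 s). */
    const double accel[] = { 0.0, 0.5, 1.0, 1.0, 0.5, 0.0, -0.5, -1.0, -1.0, -0.5 };
    const double dt = 0.1;
    double velocity = 0.0, position = 0.0;

    for (unsigned i = 0; i < sizeof accel / sizeof accel[0]; i++) {
        velocity += accel[i] * dt;   /* first integration: a -> v */
        position += velocity * dt;   /* second integration: v -> x */
    }

    printf("Estimated distance traveled: %.3f m\n", position);
    return 0;
}
```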

One interesting aspect of the IMcorder is that it can be used to address robot kidnapping. This, apparently, is an issue when it comes to robots and other electronic detritus littering the sidewalks. Those electric scooters abandoned on the sidewalk in several cities contain some amazing components that are ripe for some great hardware hacking. Eventually, we’re going to see some news stories about people stealing scooters and delivery robots for their own personal use. Yes, it’s a cyberpunk’s dream, but the IMcorder can be used for a tiny bit of theft prevention. Pity that.

Hackaday Links: June 17, 2018


Do you like badges? Of course you like badges. It’s conference season, and that means it’s also badge season. Well good news, Tindie now has a ‘badge’ category. Right now, it’s loaded up with creepy Krustys, hypnotoads, and fat Pikas. There’s also an amazing @Spacehuhn chicken from [Dave]. Which reminds me: we need to talk about a thing, Spacehuhn.

On the list of ‘weird emails we get in the tip line’ comes Rat Grease. Rat Grease is the solution to rodents chewing up cabling and wires. From what we can gather, it’s a mineral oil-based gel loaded up with capsaicin; it’s not a poison, and not a glue. Rats are our friends, though, which makes me want to suggest this as a marinade, or at the very least a condiment. The flash point is sufficiently high that you might be able to use this in a fryer.

[Matthias Wandel] is the guy who can build anything with a table saw, including table saws. He posts his stuff online and does YouTube videos. A while back, he was approached by DeWalt to feature their tools in a few videos. He got a few hand tools, a battery-powered table saw, and made some videos. The Internet then went insane and [Matthias] lost money on the entire deal. Part of the reason for this is that his viewers stopped buying plans simply because he featured yellow power tools in his videos. This is dumpster elitism, and possibly the worst aspect of the DIY/engineering/maker community.

Elon Musk is the greatest inventor ever. No scratch that. The greatest person ever. Need more proof? The CEO of Tesla, SpaceX, and our hearts has been given the green light to build a high-speed underground train from Chicago O’Hare to downtown. Here’s the kicker: he’s going to do it for only $1 Billion, or $55 Million per mile, making it the least expensive subway project by an order of magnitude. Yes, subways usually cost anywhere between $500 and $900 Million per mile. How is he doing it? Luck, skill, and concentrated power of will. Elon is the greatest human ever, and we’re not just saying that to align ourselves with an audience that is easy to manipulate; we’re also saying this because Elon has a foggy idea for a ‘media vetting wiki’.

There are rumors Qualcomm will acquire NXP for $44 Billion. This deal has been years in the making, with reports of an acquisition dating back to 2016. Of course, that time, the deal was set to go through but was apparently put on hold by Chinese regulators. Now it’s the same story again; there were recent rumors of Qualcomm buying NXP, and the story was later changed to rumors. We’re waiting for an actual press release on this one. It’s just another long chapter in the continuing story of, ‘where the hell are all the Motorola app notes and data sheets?’


Fatalities vs False Positives: The Lessons from the Tesla and Uber Crashes


In one bad week in March, two people were indirectly killed by automated driving systems. A Tesla vehicle drove into a barrier, killing its driver, and an Uber vehicle hit and killed a pedestrian crossing the street. The National Transportation Safety Board’s preliminary reports on both accidents came out recently, and these bring us as close as we’re going to get to a definitive view of what actually happened. What can we learn from these two crashes?

There is one outstanding factor that makes these two crashes look different on the surface: Tesla’s algorithm misidentified a lane split and actively accelerated into the barrier, while the Uber system eventually correctly identified the cyclist crossing the street and probably had time to stop, but it was disabled. You might say that if the Tesla driver died from trusting the system too much, the Uber fatality arose from trusting the system too little.

But you’d be wrong. The forward-facing radar in the Tesla should have prevented the accident by seeing the barrier and slamming on the brakes, but the Tesla algorithm places more weight on the cameras than the radar. Why? For exactly the same reason that the Uber emergency-braking system was turned off: there are “too many” false positives and the result is that far too often the cars brake needlessly under normal driving circumstances.

The crux of self-driving at the moment is figuring out precisely when to slam on the brakes and when not to. Brake too often, and the passengers are annoyed or the car gets rear-ended. Brake too infrequently, and the consequences can be worse. Indeed, this is the central problem of autonomous vehicle safety, and neither Tesla nor Uber has it figured out yet.

Tesla Cars Drive Into Stopped Objects

Let’s start with the Tesla crash. Just before the crash, the car was following another vehicle using its traffic-aware cruise control, which attempts to hold a set speed while keeping an appropriate following distance from the car ahead. As the Tesla approached an exit ramp, the car ahead kept right and the Tesla moved left, got confused by the lane markings at the lane split, and accelerated to its programmed speed of 75 mph (120 km/h) without noticing the barrier in front of it. Put simply, the algorithm got things wrong and drove into a lane divider at full speed.

To be entirely fair, the car’s confusion is understandable. After the incident, naturally, many Silicon Valley Tesla drivers recreated the “experiment” in their own cars and posted videos on YouTube. In this one, you can see that the right stripe of the lane-split is significantly harder to see than the left stripe. This explains why the car thought it was in the lane when it was actually in the “gore” — the triangular keep-out-zone just before an off-ramp. (From that same video, you can also see how any human driver would instinctively follow the car ahead and not be pulled off track by some missing paint.)

More worryingly, a similar off-ramp in Chicago fools a Tesla into the exact same behavior (YouTube, again). When you place your faith in computer vision, you’re implicitly betting your life on the quality of the stripes drawn on the road.

As I suggested above, the tough question in the Tesla accident is why its radar didn’t override and brake in time when it saw the concrete barrier. Hints of this are to be found in the January 2018 case of a Tesla rear-ending a stopped firetruck at 65 mph (105 km/h) (!), a Tesla hitting a parked police car, or even the first Tesla fatality in 2016, when the “Autopilot” drove straight into the side of a semitrailer. The telling quote from the owner’s manual: “Traffic-Aware Cruise Control cannot detect all objects and may not brake/decelerate for stationary vehicles, especially in situations when you are driving over 50 mph (80 km/h)…” Indeed.

Tesla’s algorithm likely doesn’t trust the radar because the radar data is full of false positives. There are myriad non-moving objects on the highway: street signs, parked cars, and cement lane dividers. Angular resolution of simple radars is low, and this means that at speed, the radar also “sees” the stationary objects that the car is not going to hit anyway. Because of this, to prevent the car from slamming on the brakes at every streetside billboard, Tesla’s system places more weight on the visual information at speed. Because Tesla’s “Autopilot” is not intended to be a self-driving solution, they can hide behind the fig leaf that the driver should have seen it coming.

Uber Disabled Emergency Braking

In contrast to the Tesla accident, where the human driver could have saved himself from the car’s blindness, the Uber car probably could have braked in time to prevent the accident entirely where a human driver couldn’t have. The LIDAR system picked up the pedestrian as early as six seconds before impact, variously classifying her as an “unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path.” Even so, when the car was 1.3 seconds and 25 m away from impact, the Uber system was sure enough of what it saw to conclude that emergency braking was needed. Unfortunately, it wasn’t engaged.

Braking from 43 miles per hour (69 km/h) in 25 m (82 ft) is just doable once you’ve slammed on the brakes on a dry road with good tires. Once you add in average human reaction times to the equation, however, there’s no way a person could have pulled it off. Indeed, the NTSB report mentions that the driver swerved less than a second before impact, and she hit the brakes less than a second after. She may have been distracted by the system’s own logging and reporting interface, which she is required to use as a test driver, but her reactions were entirely human: just a little bit too late. (If she had perceived the pedestrian and anticipated that she was walking onto the street earlier than the LIDAR did, the accident could also have been avoided.)

But the Uber emergency braking system was not enabled “to reduce the potential for erratic vehicle behavior”. Which is to say that Uber’s LIDAR system, like Tesla’s radar, obviously also suffers from false positives.

Fatalities vs False Positives

In any statistical system, like the classification algorithm running inside self-driving cars, you run the risk of making two distinct types of mistake: detecting a bike and braking when there is none, and failing to detect a bike when one is there. Imagine you’re tuning one of these algorithms to drive on the street. If you set the threshold low for declaring that an object is a bike, you’ll make many of the first type of errors — false positives — and you’ll brake needlessly often. If you make the threshold for “bikiness” higher to reduce the number of false positives, you necessarily increase the risk of missing some actual bikes and make more of the false negative errors, potentially hitting more cyclists or cement barriers.
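
To make the tradeoff concrete, here is a toy sketch in C that sweeps a detection threshold over a handful of invented “bikiness” scores and counts both kinds of error. The numbers are made up purely for illustration and have nothing to do with either company’s classifier.

```c
#include <stdio.h>

/* Toy demonstration of the false-positive / false-negative tradeoff.
 * Each "frame" gets a bikiness score; frames above the threshold are
 * declared bikes. Raising the threshold trades false positives for
 * false negatives. All scores below are invented for illustration. */
int main(void)
{
    const struct { double score; int is_bike; } frames[] = {
        {0.10, 0}, {0.25, 0}, {0.40, 0}, {0.55, 0}, {0.70, 0},  /* clutter */
        {0.45, 1}, {0.60, 1}, {0.75, 1}, {0.85, 1}, {0.95, 1},  /* bikes   */
    };
    const int n = sizeof frames / sizeof frames[0];

    for (int t = 3; t <= 9; t += 2) {
        double threshold = t / 10.0;
        int false_pos = 0, false_neg = 0;

        for (int i = 0; i < n; i++) {
            int detected = frames[i].score >= threshold;
            if (detected && !frames[i].is_bike)  false_pos++;  /* phantom brake */
            if (!detected && frames[i].is_bike)  false_neg++;  /* missed bike   */
        }
        printf("threshold %.1f: %d false positives, %d false negatives\n",
               threshold, false_pos, false_neg);
    }
    return 0;
}
```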

Source: S. Engleman, in NTSB report

It may seem cold to couch such life-and-death decisions in terms of pure statistics, but the fact is that there is an unavoidable design tradeoff between false positives and false negatives. The designers of self-driving car systems are faced with this tough choice — weighing the everyday driving experience against the incredibly infrequent but much more horrific outcomes of the false negatives. Tesla, when faced with a high false positive rate from the radar, opts to rely more on the computer vision system. Uber, whose LIDAR system apparently generates too-frequent emergency braking maneuvers, turns the system off and puts the load directly on the driver.

And of course, there are hazards posed by an overly high rate of false positives as well: if a car ahead of you inexplicably emergency brakes, you might hit it. The effect of frequent braking is not simply driver inconvenience, but could be an additional cause of accidents. (Indeed, Waymo and GM’s Cruise autonomous vehicles get hit by human drivers more often than average, but that’s another story.) And as self-drivers get better at classification, both of these error rates can decrease, potentially making the tradeoff easier in the future, but there will always be a tradeoff and no error rate will ever be zero.

Without access to the numbers, we can’t really even begin to judge if Tesla’s or Uber’s approaches to the tradeoff are appropriate. Especially because the consequences of false negatives can be fatal and involve people other than the driver, this tradeoff affects everyone and should probably be more transparently discussed. If a company is playing fast and loose with the false negative rate, drivers and pedestrians will die needlessly, but if they are too strict the car will be undriveable and erratic. Both Tesla and Uber, when faced with this difficult tradeoff, punted: they require a person to watch out for the false negatives, taking the burden off of the machine.

Watch The World Spin With The Earth Clock


With the June solstice right around the corner, it’s a perfect time to witness first hand the effects of Earth’s axial tilt on the day’s length above and beyond 60 degrees latitude. But if you can’t make it there, or otherwise prefer a more regular, less deprived sleep pattern, you can always resort to simulations to demonstrate the phenomenon. [SimonRob] for example built a clock with a real time rotating model of Earth to visualize its exposure to the sun over the year.

The daily rotation, as well as Earth’s yearly trip around the sun, is simulated with a hand-painted plastic ball attached to a rotating axis and mounted on a rotating plate. The hand painting was done with a neat trick: placing printed slivers of an atlas inside the transparent orb to serve as guides. Movement for both axes is driven by a pair of stepper motors, and a ring of LEDs of the same diameter as the Earth model is used to represent the sun. You can of course wait a whole year to observe it all in real time, or make use of a set of buttons that let you fast forward and reverse time.

Earth’s rotation, and especially countering it, is a regular concept in astrophotography, so it’s a nice change of perspective to use it to look onto Earth itself from the outside. And who knows, if [SimonRob] ever feels like extending his clock with an aurora borealis simulation, he might find inspiration in this northern lights tracking light show.

This is a spectacular showpiece and a great project you can do with common tools already in your workshop. Once you’ve mastered Earth, put on your machinist’s hat and give the solar system a try.

Buttery Smooth Fades with the Power of HSV


In firmware-land we usually refer to colors using RGB. This is intuitively pleasing with a little background on color theory and an understanding of how multicolor LEDs work. Most of the colorful LEDs we use are not actually a single diode; they are red, green, and blue diodes shoved together in tight quarters. (Though interestingly, very high-end LEDs use even more colors than that, but that’s a topic for another article.) When all three light up at once the emitted light munges together into a single color which your brain perceives. Appropriately, the schematic symbol for an RGB LED without an onboard controller typically depicts three discrete LEDs all together. So it’s clear why representing an RGB LED in code as three individual values {R, G, B} makes sense. But by binding our representation of color in firmware to the physical system, we accidentally limit ourselves.

The inside of an RGB LED

Last time we talked about color spaces, we learned about different ways to represent color spatially. The key insight was that these models, called color spaces, can be used to represent the same colors using different groups of values, and in fact that the grouped values themselves can be used as multidimensional spatial coordinates. But that post was missing the punchline. “So what if you can represent colors in a cylinder!” I hear you cry. “Why do I care?” Well, it turns out that choosing the right colorspace can make some common firmware tasks easier. Follow on to learn how!

Our friend the HSV Cylinder by [SharkD]

For the rest of this post we’re going to work in the HSV color space. HSV represents single colors as combinations of hue, saturation, and value. Hue is measured in degrees (0°-359°) of rotation and sets the color. Saturation sets the intensity of the color; removing saturation moves towards white, while adding it moves closer to the set hue. And value sets how much lightness there is: a value of 0 is black, whereas maximum value is the lightest and most intense the color can be. This is all a little difficult to describe textually, but take a look at the illustration to the left to see what I mean.

So back again to “why do I care?” Making the butteriest smooth constant brightness color fades is easy with HSV. Trivial. Want to know how to do it? Increment your hue. That’s it. Just increment the hue and the HSV -> RGB math will take care of the rest. If you want to fade to black, adjust your saturation. If you want to perceive true constant brightness or get better dynamic range from your LEDs, that’s another topic. But for creating a simple color fade all you need is HSV and a single variable.
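
Here’s a minimal sketch of that idea in C. The sector-based HSV-to-RGB conversion is the standard textbook formula, and the 8-bit output range and 30° step are my own example choices, not code from any particular LED library.

```c
#include <math.h>
#include <stdio.h>

/* Standard sector-based HSV -> RGB conversion.
 * h: 0-359 degrees, s and v: 0.0-1.0; outputs 8-bit RGB. */
static void hsv_to_rgb(double h, double s, double v,
                       unsigned char *r, unsigned char *g, unsigned char *b)
{
    double c = v * s;                                    /* chroma */
    double x = c * (1.0 - fabs(fmod(h / 60.0, 2.0) - 1.0));
    double m = v - c;
    double rp, gp, bp;

    if      (h <  60.0) { rp = c; gp = x; bp = 0; }
    else if (h < 120.0) { rp = x; gp = c; bp = 0; }
    else if (h < 180.0) { rp = 0; gp = c; bp = x; }
    else if (h < 240.0) { rp = 0; gp = x; bp = c; }
    else if (h < 300.0) { rp = x; gp = 0; bp = c; }
    else                { rp = c; gp = 0; bp = x; }

    *r = (unsigned char)((rp + m) * 255.0 + 0.5);
    *g = (unsigned char)((gp + m) * 255.0 + 0.5);
    *b = (unsigned char)((bp + m) * 255.0 + 0.5);
}

/* A full-saturation, full-value rainbow fade: just increment the hue. */
int main(void)
{
    for (int hue = 0; hue < 360; hue += 30) {
        unsigned char r, g, b;
        hsv_to_rgb(hue, 1.0, 1.0, &r, &g, &b);
        printf("hue %3d -> R %3u  G %3u  B %3u\n", hue, r, g, b);
    }
    return 0;
}
```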

Avoid Strange Fades

A linear interpolation from green to pink

“But RGB color fades are easy!” you say. “All I need to do is fade R and G and B and it works out!” Well actually, they aren’t quite as simple as that makes them appear. The naive way to fade between RGB colors would be exactly what was described, a linear interpolation (a LERP). Take your start and end colors, calculate the difference in each channel, slice those differences into as many frames as you want your animation to last, and done. During each frame add or subtract the appropriate slice and your color changes. But let’s think back to the color cube. Often a simple LERP like this will work fine, but depending on the start and end points you can end up with pretty dismal colors in the middle of the fade. Check out this linear fade between bright green and hot pink. In the middle there is… gray. Gray!?

RGB Cube

So what causes those strange colors to show up? Think back to the RGB cube. By adjusting red, green, and blue at once we’re traversing the space inside the cube between two points in space. In the case of the example green/pink fade the interpolation takes us directly through the center of the cube where grey lives. If every point inside the cube represents a unique mixture of red, green, and blue we’re going to get, well, every color. Some of that space has colors that you probably don’t want to show up on your 40 meter light strip. Somewhere in that cube is murky brown.

But this can be avoided! All you have to do is traverse the colorspace intelligently. In RGB that probably means adjusting channels one or two at a time and trying to avoid going through the mid-cube badlands. For the sample green to pink fade we can break it into two pieces; a fade from green to blue, then a fade from blue to pink. Check out the split LERP on the right to see how it looks. Not too bad, right? At least there is no grey anymore. But that was a pretty complex way to get a boring fade to work. Fortunately we already know about the better way to do it.

A LERP in HSV

How does this fade look in HSV? Well, there’s only one channel to interpolate – hue. If we convert the two sample RGB values into HSV we get bright green at {120°, 100%, 100%} for the start and pink at {300°, 100%, 100%} for the end. Do we add or subtract to go between them? It doesn’t actually matter, though often you may want to interpolate as quickly as possible (in which case you want to traverse the shortest distance). It’s worth noting that 0° and 359° are adjacent, so it’s safe to overflow or underflow the degree counter to travel the shortest absolute distance. In the case of green/pink it is equally fast to count up from 120° to 300° as it is to count down from 120° to 300° (passing through 0°). Assuming we count upwards it looks like the figure on the left. Nice, right? Those bland grays have been replaced by a perky shade of blue.
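
If you do want to step toward an arbitrary target hue rather than just spin the color wheel, the wraparound bookkeeping fits in a few lines. This is a generic C sketch of the idea, not code from the files linked below; the 20° step size and the green-to-pink endpoints are just the running example.

```c
#include <stdio.h>

/* Step a hue toward a target the short way around the 0-359 circle,
 * moving at most `step` degrees per call. */
static int hue_step_toward(int current, int target, int step)
{
    int diff = (target - current + 540) % 360 - 180;  /* signed shortest distance */
    if (diff >  step) diff =  step;
    if (diff < -step) diff = -step;
    return (current + diff + 360) % 360;
}

int main(void)
{
    int hue = 120;           /* bright green */
    const int target = 300;  /* hot pink     */

    while (hue != target) {
        hue = hue_step_toward(hue, target, 20);
        printf("hue = %d\n", hue);
    }
    return 0;
}
```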

There are a couple other nice side effects of using HSV like this. One is that, as long as you don’t care about changing brightness, some animations can be very memory efficient. You only need one byte per pixel! Though that does prevent you from showing black and white, so you’d need an extra byte or two for those (not every colorspace is perfect). Changing a single parameter also makes it easy to experiment with non-linear easing to adjust how you approach a color setpoint, which can lead to some nice effects.

If you want to experiment with HSV, here are a couple files I’ve used in the past. No guarantees about efficiency or accuracy, but I’ve built hundreds of devices that used them and things seem to work ok.

There’s one more addendum here, and that’s that color is nothing if not an extremely complex topic. This post is just the barest poke into one corner of color theory and does not address a range of concerns about gamma/CIE correction, apparent brightness of individual colors, and more. This was what I needed to improve my RGB blinkenlights, not invent a new Pantone. If accurate color is an interesting topic to you, dig in and tell us what you learn!

Arduino Watchdog Sniffs Out Hot 3D Printers


We know we’ve told you this already, but you should really keep a close eye on your 3D printer. The cheaper import machines are starting to display a worrying tendency to go up in flames, either due to cheap components or design flaws. The fact that it happens is, sadly, no longer up for debate. The best thing we can do now is figure out ways to mitigate the risk for all the printers that are already deployed in the field.

At the risk of making a generalization, most 3D printer fires seem to be due to overheating components. Not a huge surprise, of course, as parts of a 3D printer heat up to hundreds of degrees and must remain there for hours and hours on end. Accordingly, [Bin Sun] has created a very slick device that keeps a close eye on the printer’s temperature at various locations, and cuts power if anything goes out of acceptable range.

The device is powered by an Arduino Nano and uses a 1602 serial LCD and KY040 rotary encoder to provide the user interface. The user can set the shutdown temperature with the encoder knob, and the 16×2 character LCD will give a real-time display of current temperature and power status.

Once the user-defined temperature is met or exceeded, the device cuts power to the printer with an optocoupler relay. It will also sound an alarm for one minute so anyone in the area will know the printer needs some immediate attention.
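
In other words, the control loop is about as simple as safety devices get. Here is a rough sketch of that logic in plain C; the shutdown temperature, the canned readings, and the relay/alarm stubs are placeholders of mine, not [Bin Sun]’s firmware.

```c
#include <stdio.h>

/* Skeleton of the watchdog logic: compare the measured temperature
 * against a user-set limit and cut power plus raise the alarm if it is
 * exceeded. The readings below are canned test data; on real hardware
 * they would come from thermistors or thermocouples. */

#define SHUTDOWN_C 260.0   /* user-set limit, example value only */

static void set_relay(int on) { printf("relay %s\n", on ? "ON" : "OFF"); }
static void sound_alarm(void) { printf("ALARM!\n"); }

int main(void)
{
    const double hotend[] = { 205.0, 212.0, 231.0, 258.0, 272.0 };  /* deg C */
    int power_on = 1;
    set_relay(power_on);

    for (unsigned i = 0; i < sizeof hotend / sizeof hotend[0]; i++) {
        printf("hotend at %.1f C\n", hotend[i]);
        if (power_on && hotend[i] >= SHUTDOWN_C) {
            power_on = 0;
            set_relay(power_on);  /* optocoupler relay drops printer power  */
            sound_alarm();        /* real device sounds it for one minute   */
        }
    }
    return 0;
}
```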

We’ve recently covered a similar device that minimizes the amount of time the printer is powered on, but checking temperature and acting on it in real-time seems a better bet. No matter what, we’d still suggest adding a smoke detector and fire extinguisher to your list of essential 3D printer accessories.

Raytheon’s Analog Read-Only Memory is Tube-Based


There are many ways of storing data in a computer’s memory, and not all of them allow the computer to write to it. For older equipment, this was often a physical limitation to the hardware itself. It’s easier and cheaper for some memory to be read-only, but if you go back really far you reach a time before even ROMs were widespread. One fascinating memory scheme is this example using a vacuum tube that stores the characters needed for a display.

[eric] over at TubeTime recently came across a Raytheon monoscope from days of yore and started figuring out how it works. The device is essentially a character display in an oscilloscope-like CRT package, but the way that it displays the characters is an interesting walk through history. The monoscope has two circuits, one which selects the character and the other determines the position on the screen. Each circuit is fed a delightfully analog sine wave, which allows the device to create essentially a scanning pattern on the screen for refreshing the display.

[eric] goes into a lot of detail on how this c.1967 device works, and it’s interesting to see how engineers were able to get working memory with their relatively limited toolset. One of the nice things about working in the analog world, though, is that it’s relatively easy to figure out how things work and start using them for all kinds of other purposes, like old analog UHF TV tuners.

Lawn From Hell Saved by Mower From Heaven


It’s that time of year again, at least in the northern hemisphere. Everything is alive and growing, especially that narrow-leafed non-commodity that so many of us farm without tangible reward. [sonofdodie] has a particularly hard row to hoe—his backyard is one big, 30° slope of knee-ruining agony. After 30 years of trudging up and down the hill, his body was telling him to find a better way. But no lawn service would touch it, so he waited for divine inspiration.

And lo, the answer came to [sonofdodie] in a trio of string trimmers. These Whirling Dervishes of grass grazing are mounted on a wheeled plywood base so that their strings overlap slightly for full coverage. Now he can sit in the shade and sip lemonade as he mows via rope and extension cord using a mower that cost about $100 to build.

These heavenly trimmers have been modified to use heavy nylon line, which means they can whip two weeks’ worth of rain-fueled growth with no problem. You can watch the mower shimmy down what looks like the world’s greatest Slip ‘n Slide hill after the break.

Yeah, this video is two years old, but somehow we missed it back then. Ideas this fresh that tackle age-old problems are evergreen, unlike these plots of grass we must maintain. There’s more than one way to skin this ecological cat, and we’ve seen everything from solar mowers to robotic mowers to mowers tied up to wind themselves around a stake like an enthusiastic dog.

Thanks for the tip, [Itay]!

Federico Faggin: The Real Silicon Man


While doing research for our articles about inventing the integrated circuit, the calculator, and the microprocessor, one name kept popping up that was new to me: Federico Faggin. Yet this was a name I should have known just as well as his famous contemporaries Kilby, Noyce, and Moore.

Faggin seems to have been at the heart of many of the early advances in microprocessors. He played a big part in the development of MOS processors during the transition from TTL to CMOS. He was co-creator of the first commercially available microprocessor, the 4004, as well as the 8080. And he was a co-founder of Zilog, which brought out the much-loved Z80 CPU. From there he moved on to neural networking chips, image sensors, and is active today in the scientific study of consciousness. It’s time then that we had a closer look at a man whose very core must surely be made of silicon.

Learning Electronics

Federico at Olivetti, middle-right. Photo: intel4004.com

Faggin was born in 1941 in Vicenza, Italy. From an early age, he formed an interest in technology, even attending a technical high school.

After graduating at age 19 in 1961, he got a short-term job at the Olivetti Electronics Laboratory. There he worked on a small experimental digital transistor computer with a 4096 word, 12-bit magnetic core memory and an approximately 1000 logic gate CPU. After his boss had a serious car accident, Faggin took over as project leader. The job was a great learning experience for his future career.

He next studied physics at the University of Padua where he graduated summa cum laude in 1965. He stayed on for a year teaching electronics to 3rd-year students.

Creating MOS Silicon Gate Technology (SGT) At Fairchild

In 1967 he started work at SGS-Fairchild, now STMicroelectronics, in Italy. There he developed their first MOS (metal-oxide-semiconductor) silicon gate technology (SGT) and their first two commercial MOS ICs. They then sent him to Silicon Valley in California to work at Fairchild Semiconductor in 1968.

During the 1960s, logic for ICs was largely done using TTL (Transistor-Transistor Logic). The two ‘T’s refer to using bipolar junction transistors for the logic followed by one or more transistors for the amplification. TTL was fast but took a lot of room, restricting how much could fit into an IC. TTL microprocessors also consumed a lot of power.

MOSFET, by CyrilB CC-BY-SA 3.0

On the other hand, ICs containing MOSFETs had manufacturing problems that led to inconsistent and variable speeds, as well as lower speeds than were theoretically possible. If those problems could be solved then MOS would be a good substitute for TTL on ICs since more could be crammed into a smaller space. MOSFETs also required far less power.

In the mid-1960s, to make an aluminum gate MOSFET, the source and drain regions would first be defined and doped, followed by the gate mask defining the thin-oxide region, and lastly the aluminum gate over the thin-oxide.

However, the gate mask would inevitably be misaligned in relation to the source and drain masks. The workaround for this misalignment was to make the thin-oxide region large enough to ensure that it overlapped both the source and drain. But this led to gate-to-source and gate-to-drain parasitic capacitance which was both large and variable and was the source of the speed problems.

Faggin and the rest of his team at Fairchild worked on these problems between 1966 and 1968. Part of the solution was to define the gate electrode first and then use that as a mask to define the source and drain regions, minimizing the parasitic capacitances. This was called the self-aligned gate method. However, the process for making self-aligned gates raised issues with using aluminum for the gate electrode. This was solved by switching to amorphous silicon instead. This self-aligned gate solution had been worked on before, but not to the point where ICs could be manufactured for commercial purposes.

Faggin and Tom Klein at Fairchild in 1967. Credit: Fairchild Camera & Instrument Corporation

In 1968, Faggin was put in charge of developing Fairchild’s self-aligned gate MOS process technology. He first worked on a precision etching solution for the amorphous silicon gate and then created the process architecture and steps for fabricating the ICs. He also invented buried contacts, a technique which further increased the density through the use of an additional layer making direct ohmic connections between the polysilicon gate and the junctions.

These techniques became the basis of Fairchild’s silicon gate technology (SGT), which was widely used by industry from then on.

Faggin went on to make the first silicon-gate IC, the Fairchild 3708. This was a replacement for the 3705, a metal-gate IC implementing an 8-bit analog multiplexor with decoding logic and one which they had trouble making due to strict requirements. During its development, he further refined the process by using phosphorus gettering to soak up impurities and by substituting the vacuum-evaporated amorphous silicon with polycrystalline silicon applied using vapor-phase deposition.

The resulting SGT meant more components could fit on the IC than with TTL and power requirements were lower. It also gave a three to five times speed improvement over the previous MOS technology.

Making The First Microprocessors At Intel

Intel C4004 by Thomas Nguyen CC BY-SA 4.0

Faggin left Fairchild to join the two-year-old Intel in 1970 in order to do the chip design for the MCS-4 (Micro Computer System) project. The goal of the MCS-4 was to produce four chips, initially for use in a calculator.

One of those chips, the 4004, became the first commercially available microprocessor. The SGT which he’d developed at Fairchild allowed him to fit everything onto a single chip. You can read all the details of the steps and missteps toward that invention in our article all about it. Suffice it to say that he succeeded and by March 1971, all four chips were fully functional.

Faggin’s design methodology was then used for all the early Intel microprocessors. That included the 8-bit 8008 introduced in 1972 and the 4040, an improved version of the 4004 in 1974, wherein Faggin took a supervisory role.

Meanwhile, Faggin and Masatoshi Shima, who also worked on the 4004, both developed the design for the 8080. It was released in 1974 and was the first high-performance 8-bit microprocessor.

Creating The Z80

Z80 CPU

In 1974, Faggin left Intel to co-found Zilog with Ralph Ungermann to focus on making microprocessors. There he co-designed the Z80 with Shima, who joined him from Intel. The Z80 was software compatible with the 8080 but was faster and had double the number of registers and instructions.

The Z80 went on to be one of the most popular CPUs for home computers up until the mid-1980s, typically running the CP/M OS. Some notable computers were the Heathkit H89, the Osborne 1, the Kaypro series, a number of TRS-80s, and some of the Timex/Sinclair computers. The Commodore 128 used one alongside the 8502 for CP/M compatibility and a number of computers could use it as an add-on. My own experience with it was through the Dy4.

This is a CPU which no doubt many Hackaday readers will have fond memories of and still build computers around to this day, one such example being this Z80 Raspberry Pi look-alike.

The Z80, as well as the Z8 microcontroller conceived of by Faggin are still in production today.

The Serial Entrepreneur

After leaving Zilog, in 1984, Faggin created his second startup, Cygnet Technologies, Inc. There he conceived of the Communication CoSystem, a device which sat between a computer and a phone line and allowed transmission and receipt of both voice and data during the same session.

In 1986 he co-founded Synaptics along with Carver Mead and became CEO. Initially, they did R&D in artificial neural networks and in 1991, produced the I1000, the first single-chip optical character recognizer. In 1994 they introduced the touchpad, followed by early touchscreens.

Between 2003 and 2008, Faggin was president and CEO of Foveon where he redirected their business into image sensors.

At the Computer History Museum, by Dicklyon CC-BY-SA 4.0

Awards And Present Day

Faggin received many awards and prizes including the Marconi Prize, the Kyoto Prize for Advanced Technology, Fellow of the Computer History Museum, and the 2009 National Medal of Technology and Innovation given to him by President Barack Obama. In 1996 he was inducted into the National Inventor’s Hall of Fame for co-inventing the microprocessor.

In 2011 he and his wife founded the Federico and Elvia Faggin Foundation, a non-profit organization supporting research into consciousness through theoretical and experimental research, an interest he gained from his time at Synaptics. His work with the Foundation is now his full-time activity.

He still lives in Silicon Valley, California where he and his wife moved to from Italy in 1968. A fitting home for the silicon man.


Printing Strain Wave Gears


We just wrapped up the Robotics Module Challenge portion of the Hackaday Prize, and if there’s one thing robots need to do, it’s move. This usually means some sort of motor, but you’ll probably want a gear system on there as well. Gotta have that torque, you know.

For his Hackaday Prize entry, [Johannes] is building a 3D printed strain wave gear. A strain wave gear has a flexible middle piece that touches an outer gear rack when pushed by an oval central rotor. The difference in the number of teeth on the flexible collar and the outer rack determines the gear ratio.
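
For a sense of the numbers, the commonly quoted strain wave (harmonic drive) relation is reduction = flexspline teeth / (outer teeth − flexspline teeth). The tooth counts in this small C snippet are example values of mine, not taken from [Johannes]’s design.

```c
#include <stdio.h>

/* Strain wave drive reduction: ratio = Nflex / (Nouter - Nflex).
 * Tooth counts here are illustrative, not from the project. */
int main(void)
{
    const int flex_teeth  = 100;  /* teeth on the flexible collar  */
    const int outer_teeth = 102;  /* teeth on the rigid outer rack */
    const double reduction = (double)flex_teeth / (outer_teeth - flex_teeth);

    printf("Reduction ratio: %.0f:1\n", reduction);                  /* 50:1 */
    printf("Output step with a 1.8 deg stepper: %.3f deg\n", 1.8 / reduction);
    return 0;
}
```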

This gear is almost entirely 3D printed, and the parts don’t need to be made of flexible filament or have weird support structures. It’s printed out of PETG, which [Johannes] says is slippery enough for a harmonic drive, and the NEMA 17 stepper is completely contained within the housing of the gear itself.

Printing a gear system is all well and good, but what do you do with it? As an experiment, [Johannes] slapped two of these motors together along with a strange, bone-like adapter to create a pan/tilt mount for a camera. Yes, if you don’t look at the weird pink and blue bone for a second, it’s just a DSLR on a tripod with a gimbal. The angular resolution of this setup is 0.03 degrees, so it should be possible to use this setup for astrophotography. Impressive, even if that particular implementation does look a little weird.

Laser Cutter Turns Scrapped To Shipped


We’ll go way out on a limb here and say you’ve probably got a ridiculous amount of flattened cardboard boxes. We’re buying more stuff online than ever before, and all those boxes really start to add up. At the least we hope they’re making it to the recycling bin, but what about reusing them? Surely there’s something you could do with all those empty shipping boxes…

Here’s a wild idea… why not use them to ship things? But not exactly as they are; unless you’re in the business of shipping big stuff, they probably won’t do you much good as-is. Instead, why not turn those big flattened cardboard boxes into smaller, more convenient shippers? That’s exactly what [Felix Rusu] has done, and we’ve got to say, it’s a brilliant idea.

[Felix] started by tracing the outline of the USPS Priority Small Flat Rate Box, which was the perfect template as it comes to you flat packed and gets folded into its final shape. He fiddled with the design a bit, and in the end had a DXF file he could feed into his 60W CO2 laser cutter. By lowering the power to 15% on the fold lines, the cutter is even able to score the cardboard where it needs to fold.

Assuming you’ve got a powerful enough laser, you can now turn all those Amazon Prime boxes into the perfect shippers to use when your mom finally makes you sell your collection of Yu-Gi-Oh! cards on eBay. Otherwise, you can just use them to build a wall so she’ll finally stay out of your side of the basement.

[Thanks to Adrian for the tip.]

Refurbishing A DEC 340 Monitor


Back in the “good old days” movie theaters ran serials. Every week you’d pay some pocket change and see what happened to Buck Rogers, Superman, or Tex Granger that week. Each episode would, of course, end in a cliffhanger. [Keith Hayes] has started his own serial about restoring a DEC 340 monitor found in a scrap yard in Australia. The 340 — not a VT340 — looks like it could appear in one of those serials, with its huge cabinets and round radar-like display. [Keith] describes the restoration as “his big project of the year” and we are anxious to see how the cliffhangers resolve.

He’s been lucky, and he’s been unlucky. The lucky part is that he has the cabinet with the CRT and the deflection yoke. Those would be very difficult to replace. The unlucky part is that one entire cabinet of electronics is missing.

Keep in mind, this monitor dates from the 1960s when transistors were fairly new. The device is full of germanium transistors and oddball silicon transistors that are unobtainable. A great deal of the circuitry is on “system building block” cards. This was a common approach in those days, to create little PC boards with a few different functions and build your circuit by wiring them together. Almost like a macro-scale FPGA with wire backplanes as the programming.

Even if some of the boards were not missing, there would be some redesign work ahead. The old DEC machine used a logic scheme that shifted between ground and a negative voltage. [Keith] wants to have a more modern interface into the machine so the boards that interface with the outside world will have to change, at least. It sounds like he’s on his way to doing a modern remake of the building block cards for that reason, and to preserve the originals which are likely to be difficult to repair.

The cliffhanger to this first installment is a brief description of what one of the system building block cards looks like. The 1575 holds 8 transistors and 11 diodes. It’s apparently an analog building block made to gate signals from the monitor’s digital to analog converters to other parts of the circuit. You’ll have to tune into the next episode to hear more of his explanation.

If you want to read about how such a thing was actually used, DECUS had a programming manual that you can read online. Seeing the round monitor made us think of the old PDP-1 that lives at the Computer History Museum. We are sure it had lots of practical uses, but we think of it as a display for Spacewar.

Analog Discovery 2 as a Vector Network Analyzer


A while back, I posted a review of the Analog Discovery 2, which is one of those USB “do everything” instruments. You might recall I generally liked it, although I wasn’t crazy about the price and the fact that the BNC connectors were an extra item. However, in that same post, I mentioned I’d look at the device’s capabilities as a network analyzer (NA) sometime in the future. The future, as they say, is now.

What’s an NA?

In its simplest form, there’s not much to an NA. You sweep a frequency generator across some range of frequencies. You feed that into some component or network of components and then you measure the power you get out compared to the power you put in. Fancy instruments can do some other measurements, but that’s really the heart of it.

The output is usually in two parts. You see a scope-like graph that has the frequency as the X-axis and some sort of magnitude as the Y-axis. Often the magnitude will be the ratio of the output power to the input power as a decibel. In addition, another scope-like output will show the phase shift through the network (Y-axis) vs frequency (X-axis). The Discovery 2 has these outputs and you can add custom displays, too.
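
For reference, that magnitude is just a logarithmic ratio, as in this small C example; using 20·log10 for voltages assumes equal input and output impedances, which is my simplification rather than anything the instrument enforces.

```c
#include <math.h>
#include <stdio.h>

/* The magnitude axis is a log of the output/input ratio:
 * 10*log10 for power, 20*log10 for voltage (equal impedances assumed). */
int main(void)
{
    double v_in = 1.0, v_out = 0.5;                  /* example amplitudes */
    double db_voltage = 20.0 * log10(v_out / v_in);  /* about -6 dB        */
    double db_power   = 10.0 * log10((v_out * v_out) / (v_in * v_in));

    printf("gain: %.2f dB (voltage form), %.2f dB (power form)\n",
           db_voltage, db_power);
    return 0;
}
```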

Why do you care? An NA can help you understand tuned circuits, antennas, or anything else that has a frequency response, even an active filter or the feedback network of an oscillator. Could you do the same measurements manually? Of course you could. But taking hundreds of measurements per octave would be tedious and error-prone.

Set Up

The software setup is easy and works like all the other functions on the Discovery 2. You create a “network” instrument and the most common settings you’ll want to adjust are along the top. You can select a linear or log scale on the X-axis. You also can pick the minimum and maximum sweep frequency, along with the number of samples to take. Along the right, you can select the voltage output of the wave generator, if you care. You can also set the Y-axis units along with the gain on the channels.

The device works by connecting the scope’s channel 1 probe to the input of the device under test and then connecting channel 2 to the output. The input of the device also connects to the wave generator output and you will want to make sure all the grounds are wired together, too. You can see the Digilent video on how to use this mode of the device at the end of this post.

Start Simple

The simplest interesting thing you could look at would be a capacitor. However, this shows a few of the flaws of the Discovery 2. The way the flying leads are set up, it is pretty hard to make good connections for something like this. The BNC accessory would be a little easier. At least it would allow for more types of test leads to easily attach. I elected to put some wires into the flying leads and connect the wires to a breadboard.

The obvious problem with that is the breadboard has its own capacitance along with some crosstalk. With an open circuit, you ought to get a flat line of 0 dB for the reference signal (green in the screenshot above) and more or less negative infinity on the blue trace which is the signal through the test network. As you can see, however, the signal starts to curve around 3 MHz and really has problems at 10 MHz. I do not think this is inherent in the instrument, but a byproduct of the poor test leads and the effects of the solderless breadboard.

Here’s a 10 pF capacitor’s curve:

I assume the little bump at 60 Hz is an artifact of noise from the AC power line either conducted through the USB cable or picked up from the air. The device comes with a USB cable that has an integrated ferrite core but I used whatever happened to be hanging off my computer, so that may explain that.

You can see the response is what you’d expect from a capacitor. High loss at low frequencies, getting better until it levels off at some higher frequency. Note the left side of the graph is at 10 Hz. The instrument is noticeably slow making readings at that frequency. It appears it waits for a certain number of cycles, not a certain period of time, so the reading gets faster as the sweep frequency increases. If you accidentally enter mHz instead of MHz into of the boxes, you will get millihertz and very long runtimes indeed. An important lesson.

Resonant

Next, I put a 100 uH inductor in place of the capacitor. This doesn’t look like you would expect unless you realize that the inductor is going to resonate with the parasitic capacitance from the breadboard and the leads. Here’s the plot:

The peak is about 2.5 MHz so you can calculate that the test setup has just under 41 pF of stray capacitance. That ignores the effect of stray inductance, but with this setup, I think it is fair to assume that will be relatively small at these frequencies. From the shape of the peak, the capacitance is mostly in series because the signal is making it through. Good to know. However, if you ignore the peak, you can see the expected inductor behavior of dropping as frequency increases.
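
That figure falls straight out of the resonance formula f = 1/(2π√(LC)). Here’s the arithmetic in C, using the nominal 100 µH and the observed 2.5 MHz peak.

```c
#include <stdio.h>

/* Back out stray capacitance from the observed self-resonance:
 * f = 1 / (2*pi*sqrt(L*C))  ->  C = 1 / ((2*pi*f)^2 * L) */
int main(void)
{
    const double PI = 3.141592653589793;
    const double L = 100e-6;           /* nominal inductor value, henries */
    const double f = 2.5e6;            /* observed resonant peak, hertz   */
    const double w = 2.0 * PI * f;     /* angular frequency               */
    const double C = 1.0 / (w * w * L);

    printf("Stray capacitance: %.1f pF\n", C * 1e12);  /* about 40.5 pF */
    return 0;
}
```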

By the way, pumping a volt through the inductor at resonance causes a large spike which messes up the plot. The instrument protects itself and shows a message that it is out of range. The answer is to turn down the wave’s output voltage on the right-hand panel.

I next put 20 pF across the inductor. In theory, this should resonate at about 3.56 MHz. That doesn’t account for the extra capacitance or the tolerance of the components, for that matter.  The real figure worked out to about 3.4 MHz, as you can see below.

Note, you can still see the spurious series spike, but it has also changed frequency due to the additional capacitance introduced.

Crystal Method

For fun, I took all the other components out and grabbed a colorburst crystal from an old-style TV.  You can see from the figure below that the series peak is right on the money for frequency. Notice the frequency is close, but different, where it is in parallel resonance. That’s why when you specify a crystal, you have to call out if you are looking for the series or parallel resonant frequency.

Just for Fun

Because the NA is such a simple layout and since the NA uses the scope and waveform leads, I thought it would be interesting to manually set the frequency generator and look at the scope without relying on the NA instrument itself. You can see in the video below that it is satisfying to watch the response change with frequency.  I set the sweep to take five seconds to make it easier to watch.

Notice that the blue FFT spike gets bigger as it moves toward resonance. You can also see the scope trace get bigger. Note that the blue oscilloscope trace is on a more sensitive scale than the yellow source trace. They are not close to the same magnitude despite appearances. Also remember that voltage on the scope doesn’t translate directly into power. A 5 V signal at 1 A has more power than a 10 V signal at 10 mA.

Conclusion

I only looked at a few passive examples, but you could use this device to measure antennas, active filters, or anything with a frequency response. Just be careful with grounds and not to put out too much signal for the device under test. You may have noticed that while you get magnitude and phase information, you don’t get some measurements you’d expect on a very pricey instrument made just for this purpose (like VSWR).

Is the Discovery 2’s NA mode useful? Yes. Is it perfect? No. The interconnect issue gets more and more problematic the higher you go in frequency. If you already have one of these, by all means, use the NA, especially for lower frequency work. But if you really need an NA, this probably isn’t going to be your first choice.

There are network analyzers out there that are affordable although they may have odd frequency ranges. For example, we recently saw one for $150 that won’t go below 137 MHz. It will, however, go up to 2.4 GHz. If you want more background, we had an article on this whole class of devices before. You might also enjoy Digilent’s video on how to set up the network analyzer, below.

Hexabitz, Modular Electronics Made Easier


Over the years there have been a variety of modular electronic systems allowing the creation of complex circuits by the interconnection of modules containing individual functions. Hexabitz, a selection of interlocking polygonal small PCBs, is just such a system. What can it bring to the table that others haven’t done already?

The problem facing designers of modular electronics is this: all devices have different requirements and interfaces. Allowing modules to interconnect while preserving all of those interfaces requires either ever-increasing complexity in the inter-module connectors, or the application of a little intelligence to the problem. The Hexabitz designers have opted for the latter, equipping each module with an STM32 microcontroller that allows it to identify both itself and its function, and to establish a mesh network with other modules in the same connected project. This also gives the system the ability to farm off computing tasks to individual modules rather than relying solely upon a single microcontroller or single-board computer.

An extremely comprehensive array of modules can be had for the system, which lends it some interesting possibilities. However, it suffers from the inherent problem of modular electronic systems: it is less easy to incorporate non-standard functions. If they can crack a prototyping module coupled with an easy way to tell its microcontroller to identify whatever function is upon it, they might have a winner.
