So the findings here do make sense. For sub-5m cables, directly connecting two machines is going to be faster than having some PHY in between that has to resignal. I'm surprised that fiber is only 0.4ns/m worse than their direct copper cables; that is pretty incredible.
What I would actually like to see is how this performs in a more real-world situation. Like, does this increase line error rates, causing the transport or application to resend at a higher rate, which would erase all the savings from the lower latency? Also, if they are really signaling these in the multi-GHz range, are these passive cables acting like antennas, and is a cabinet full of them just killing itself on crosstalk?
High-speed links all have forward error correction now (even PCIe); nothing in my small rack full of 40GbE devices connected with DACs has any link-level errors reported.
They looked at the medium itself, not the attached data link hardware.
Look at the graphs. The fiber has a higher slope; each meter adds more latency than a meter of copper.
This is simply due to the speed of electromagnetic wave propagation in the different media.
https://networkengineering.stackexchange.com/questions/16438...
Both the propagation of light in fiber and signal propagation in copper are much slower than the speed of light in vacuum, but they are not equal.
There's also hollow core fiber, which is pretty close to the speed of light in a vacuum: 2.0e8 m/s for standard fiber, 2.3e8 m/s for copper, and close to the full 3.0e8 m/s for hollow core.
No glass, just some reflective coating on the inside of a waveguide (hollow tube).
https://azure.microsoft.com/en-us/blog/how-hollow-core-fiber...
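Back-of-the-envelope, those speeds as per-meter delays (a quick Python sketch, using just the rough figures quoted above):

    # Per-meter propagation delay for the speeds quoted above
    speeds_m_per_s = {
        "standard fiber (~0.67c)": 2.0e8,
        "copper (~0.77c)":         2.3e8,
        "hollow core (~c)":        3.0e8,
    }
    for medium, v in speeds_m_per_s.items():
        print(f"{medium}: {1e9 / v:.2f} ns/m")
    # standard fiber: 5.00 ns/m, copper: 4.35 ns/m, hollow core: 3.33 ns/m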
Storage over copper used to be suboptimal, but not necessarily due to the cable. UDP QUIC is much closer to wire speed. So 10Gb copper and 10Gb fiber are probably the same, but 40+Gb fiber is quite common now.
> So the findings here do make sense. For sub-5m cables, directly connecting two machines is going to be faster than having some PHY in between that has to resignal. I'm surprised that fiber is only 0.4ns/m worse than their direct copper cables; that is pretty incredible.
Surely resignaling should be the fixed cost they calculate at about 1ns? Why does it also incur a 0.4ns/m cost?
Light speed is ~3ns per metre, so maybe the lowered speed through the fibre?
Speed of electricity in wire should be pretty close to c (at least the front)
Velocity factor in most cables is between 0.6 and 0.8 of what it is in a vacuum. Depends on the dielectric material and cable construction.
This is why point-to-point microwave links took over the HFT market -- they're covering miles with free space, not fiber.
I always thought it was about reduced path length. Interesting.
It's both. Those links try to minimise deviation from the straight link (and invest significant money to get antenna locations to do that), but they also use copper/coax cables for connecting radios as well as hollow core fibre for other connections to the modems.
I misremembered the speed of electrical signal propagation from high school physics. It's around two-thirds the speed of light in a vacuum, not one-third. The speed of light in an optical fibre is also around two-thirds of the speed in a vacuum.
It seems there is quite a wide range for different types of cables so some will be faster and others slower than optical fibre. https://en.wikipedia.org/wiki/Velocity_factor
But the resignalling must surely be unrelated?
> Light speed is ~3ns per metre, so maybe the lowered speed through the fibre?
Obligatory Adm. Grace Hopper nanosecond reference:
* https://www.youtube.com/watch?v=si9iqF5uTFk&t=40m10s
It's c, but not the same c as in air or vacuum. The same applies in optic fibers. They're both around two thirds of the speed of light in vacuum.
c is constant, the speed of light is not.
c is the speed of light in a vacuum, but it is not really about light; it is a property of spacetime itself, and light just happens to be carried by a massless particle, which, according to Einstein's equations, makes it go at c (when undisturbed by the medium). Gravity also goes at c.
I've always considered c to be the speed of light, with gravity going at the speed of light, rather than light and gravity both going at c, a property of spacetime itself. That is a much simpler mental model; thanks for the simple explanation!
You can think of c as the conversion rate between space and time; then, light (and anything else without mass, such as gravity or gluons) travels at a speed of 1. Everything else travels at a speed of less than 1.
(Physicists will in fact use the c=1 convention when keeping track of the distinction between distance units and time units is not important. A related convention is hbar=1.)
You can tell that c is fundamental, rather than just a property of light, from how it appears in the equations for Lorentz boosts (length contraction and time dilation).
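For reference, the boost equations in question (standard special relativity, nothing specific to this thread) for a frame moving at velocity v along x:

    x' = \gamma\,(x - vt), \qquad
    t' = \gamma\left(t - \frac{vx}{c^2}\right), \qquad
    \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}

c appears in the transformation of the space and time coordinates themselves, with no light signal anywhere in the setup.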
I've always thought of c as the speed limit of causality
c is the speed of light in vacuum.
EM signals move at about 0.66c in fiber, and about 0.98c in copper.
More like 0.6c to 0.75c in Cat6 Ethernet cable.
The insulation slows it down.
Don't know why you were downvoted; this is true. RF energy is carried primarily (solely?) by the dielectric, not the copper itself, simply by virtue of the fact that this is where the E and H fields (and therefore the Poynting vector) are nonzero. It's therefore the velocity factor of the dielectric which is relevant.
> I'm surprised that fiber is only 0.4ns/m worse than their direct copper cables
Especially since physics imposes a ~1.67ns/m penalty on fiber. The best-case inverse speed of light in copper is ~3.3ns/m, while it's ~5ns/m in fiber optics.
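A quick sanity check on those numbers (a sketch; the ~0.67 velocity factor for silica fiber is the usual textbook value, and the 0.4ns/m gap is the article's measurement):

    c = 3.0e8  # m/s, speed of light in vacuum

    fiber_ns_per_m = 1e9 / (0.67 * c)   # ~5.0 ns/m for silica fiber
    copper_best    = 1e9 / c            # ~3.3 ns/m, air-dielectric limit
    print(fiber_ns_per_m - copper_best) # ~1.64 ns/m best-case penalty

    # Working backwards from the measured 0.4 ns/m gap instead:
    copper_measured = fiber_ns_per_m - 0.4   # ~4.6 ns/m
    print(1e9 / copper_measured / c)         # implied velocity factor ~0.73

So the measured gap is consistent with twinax at a velocity factor of roughly 0.7, not with copper at the ~c best case.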
DACs don't cause problems, but twisted pair at 10Gig is a PITA due to power and thermals
What allows DACs to avoid the power/thermal issues that twisted pair has?
(My naive view is that they're both 'just copper'?)
DACs are usually twin-ax, which is just 2 coax cables bundled. The shielding matters a lot, compared to unshielded twisted pairs.
Faster parallel DACs require more pairs of coax, and thus are thicker and more expensive.
> are these passive cables acting like antenna
With both ends connected to a device? No.
Aside from that, you've got a linear scrambler into balanced drivers into twisted pair. It's about as noise-immune as you can get, unless you put the noise source right up next to the cable itself.
PHYs are going away and fiber is going straight to the chip now, so while the article is correct, in the near future this will not be the case.
The chip has a PHY built into it on-die, you mean. This affects the timing for getting the signal from memory to the PHY, but not necessarily the switching times of transistors in the PHY, nor the timing of turning the light on and off.
"Has lower latency than" fiber. Which is not so shocking. And, yes, technically a valid use of the word "faster" but I think I'm far from the only one who assumed they were going to make a bandwidth claim rather than a latency claim.
I wonder where the idea of "fast" being about throughput comes from. For me it has always, always only ever meant latency.
Latency to the first byte is one thing, latency to the last byte, quite another. A slow-starting high-throughput connection will bring you the entire payload faster than an instantaneously starting but low-throughput connection. The larger the payload, the more pronounced is the difference.
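A minimal model of that (a sketch with made-up link parameters; it ignores slow-start dynamics, congestion, and per-packet overheads):

    def time_to_last_byte(latency_s, bandwidth_Bps, payload_B):
        # first byte arrives after the latency; the rest streams at line rate
        return latency_s + payload_B / bandwidth_Bps

    # 4 GB payload: high-latency/high-throughput vs low-latency/low-throughput
    print(time_to_last_byte(0.100, 40e6, 4e9))  # ~100.1 s at 100 ms RTT, 40 MB/s
    print(time_to_last_byte(0.001, 4e6, 4e9))   # ~1000.0 s at 1 ms RTT, 4 MB/s

Past a modest payload size, the bandwidth term dominates the latency term completely.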
ehh... latency is an objective term that, for me at least, has always meant something like "how quickly can you turn on a light bulb at the other end of this system"
The term under discussion is "speed", which goes beyond latency. If you have low latency but also high bandwidth, the link is "faster", i.e., in "time to last byte".
Latency is well defined and nobody is quibbling on that.
An SR-71 Blackbird flies faster than a 747. Nevertheless, a 747 can get 350 people from LA to New York faster than the SR-71.
If I have to download a 4GB movie, the round-trip latency is not so important. At 4MB/s I can get the file in 1000s; at 40MB/s I can get it in 100s.
I think it's just because ISPs have ingrained in people that "speed" means bandwidth when it comes to the internet. Improving bandwidth is pretty cheap compared to improving latency, because the latter requires changing the laws of physics.
If only the bottleneck was the laws of physics. In reality, it's mostly legacy infrastructure, which is of course much harder to change than the laws of physics.
A 9600 baud serial connection between two machines in the 90's would have low latency, but few would have called it fast.
Maybe it's all about sufficient bandwidth - now that it's ubiquitous, latency tends to be the dominant concern?
Presumably from end users who care about how much time it takes to receive or send some amount of data.
Until pretty recently, throughput dominated the actual human-relevant latency of time-until-action-completes on most connections for most tasks. "Fast" means that your downloads complete quickly, or web pages load quickly, or your e-mail client gets all of your new mail quickly. In the dialup age, just about everything took multiple seconds if not minutes, so the ~200ish ms of latency imposed by the modem didn't really matter. Broadband brought both much greater throughput and much lower latency, and then web pages bloated and you were still waiting for data to finish downloading.
> I wonder where the idea of "fast" being about throughput comes from.
A cat video will start displaying much sooner with 1 Mbps of bandwidth compared to 100 Kbps:
> taking a comparatively short time
* https://www.merriam-webster.com/dictionary/fast § 3(a)(2)
> done in comparatively little time; taking a comparatively short time: fast work.
* https://www.dictionary.com/browse/fast § 2
So an online experience happens sooner (= faster-in-time) with more bandwidth.
I assumed they were going to make a bandwidth claim and was prepared to reject it as nonsense.
Instantly assumed that it was clickbait.
So basically: Lower latency, lower bandwidth?
> So basically: Lower latency, lower bandwidth?
No: DAC and (MMF/SMF) fibre will (in this example) both give you 10Gbps.
This coming from Arista is unsurprising, because their original niche was low latency, and the first industry where they made inroads against the 'incumbents' was finance:
> The low-latency of Arista switches has made them prevalent in high-frequency trading environments, such as the Chicago Board Options Exchange[50] (largest U.S. options exchange) and RBC Capital Markets.[51] As of October 2009, one third of its customers were big Wall Street firms.[52]
* https://en.wikipedia.org/wiki/Arista_Networks
They've since expanded into more areas, and are said to be fairly popular with hyper-scalers. Often recommended in forums like /r/networking (support is well-regarded).
One of the co-founders is Andy Bechtolsheim, also a co-founder of Sun, and who wrote Brin and Page one of the earliest cheques to fund Google:
* https://en.wikipedia.org/wiki/Andy_Bechtolsheim
It's not copper that's faster; it's the dielectric in between the twisted pair that has a lower index of refraction.
And, if we neglect how long the signal can travel like the authors do, copper is always going to win this fight vs. fiber because copper can use air as its dielectric but fiber cannot.
This isn't really surprising. Fiber isn't better because of signal propagation speed, it's all about signal integrity.
https://en.wikipedia.org/wiki/Velocity_factor
That, and the physical decoupling of information into a medium other than EM.
Try running Cat cables along power lines the way aerial fibre is run.
Pedantically, light is still EM.
But I think I understand what you mean.
The shape of individual EM waveforms is no longer relevant; instead there are just buckets of "got some or not".
IIRC, the passive copper SFP Direct Attach cables are basically just a fancy "crossover cable" (for those old enough to remember those days). Essentially there is no medium conversion.
The speed of light is also ever so slightly faster in twinax than in fiber(glass).
Not enough to matter in this comparison, but i thought I should mention it.
It's been long known that Direct Attach Copper cables (DACs) are faster for short runs. It makes sense, since there does not need to be an electrical-optical conversion.
I suppose you are right, but we may not say "it has been widely known". Lots of us who read HN come from the software side, and we coders often hand-wave on these topics when shooting the breeze -- much like how a casual car enthusiast might not imagine it was possible for a 6-cylinder engine to have more horsepower than a V8.
What are applications where 5ns latency improvement is significant?
High Frequency Trading is one.
Anything else? Because that's the only one I can think of.
I'd expect HPC would be another, since a lot of algorithms that run on those clusters are bottlenecked by latency or throughput in communication.
For the parent: and not only bottlenecked at single hops but also hampered by the propagation of latency as the hops increase, depending on the complexity of the distributed system design.
> […] by the propagation of latency as the hops increase […]
Which is why you get network topologies other than 'just' fat tree in HPC networks:
* https://www.hpcwire.com/2019/07/15/super-connecting-the-supe...
* https://en.wikipedia.org/wiki/Torus_interconnect
HPC?
HPC = High-Performance Computing
https://en.wikipedia.org/wiki/High-performance_computing
High-frequency trading is the primary application, where 5ns can represent millions in profit as firms compete to execute trades first, but you'll also see benefits in distributed database synchronization, real-time financial risk calculations, and some specialized scientific computing workloads.
any high-utilization workload with a chatty protocol dominated by small IOs, such as:
* distributed filesystems such as MooseFS, Ceph, Gluster used for hyperconverged infrastructure
* SANs hosting VMs with busy OLTP databases
* OLTP replication
* CXL memory expansion where remote memory needs to be as close to inter-NUMA node latency as possible
Should be "Copper is Faster than Fiber in some circumstances".
I wonder how much better hollow core fiber would be. My guess is faster than copper, even given the conversion and retimer latencies.
FEC latency is >> propagation delays at these distances, so that's probably the dominant factor in most cases
Faster only because the distances involved are short enough that the PHY layer adds significant overhead. But if you could somehow wave a magic wand and make optical computing work, then fiber would be faster (and generate less heat).
> Faster only because the distances involved are short enough that the PHY layer adds significant overhead.
This specifically mentions the 7130 model, which is a specialized bit of kit, and which Arista advertises for (amongst other things):
> Arista's 7130 applications simplify and transform network infrastructure, and are targeted for use cases including ultra-low latency exchange trading, accurate and lossless network visibility, and providing vendor or broker based shared services. They enable a complete lifecycle of packet replication, multiplexing, filtering, timestamping, aggregation and capture.
* https://www.arista.com/en/products/7130-applications
It is advertised as a "Layer 1" device and has a user-programmable FPGA. Some pre-built applications are: "MetaWatch: Market data & packet capture, Regulatory compliance (MiFID II - RTS 25)", "MetaMux: Market data fan-out and data aggregation for order entry at nanosecond levels", "MultiAccess: Supporting Colo deployments with multiple concurrent exchange connection", "ExchangeApp: Increase exchange fairness, Maintain trade order based on edge timestamps".
Latency matters (and may even be regulated) in some of these use cases.
The PHY contributes only a 1ns difference, but the results also show a 400ps/m advantage for copper, which I can only assume comes from the difference in EM propagation speed in the medium.
No. Look at the graph -- the offset when extrapolated back to zero length is the PHY's contribution.
The differing slope of the lines is due to velocity factor in the cable. The speed of light in vacuum is much faster than in other media. And the lines diverge the longer you make them.
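Which you can read off with a straight-line fit (a sketch with invented data points shaped like the article's plot, not the actual measurements):

    import numpy as np

    # hypothetical (length_m -> latency_ns) samples
    lengths = np.array([1.0, 2.0, 3.0, 5.0])
    fiber   = np.array([6.0, 11.0, 16.0, 26.0])  # ~5.0 ns/m slope + ~1 ns offset
    copper  = np.array([4.6, 9.2, 13.8, 23.0])   # ~4.6 ns/m slope, ~0 ns offset

    for name, y in (("fiber", fiber), ("copper", copper)):
        slope, offset = np.polyfit(lengths, y, 1)
        print(f"{name}: {slope:.1f} ns/m (medium), {offset:.1f} ns fixed (PHY etc.)")

The intercept is the fixed per-hop cost (the PHY), and the slope is the per-meter cost (the velocity factor of the medium).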
It's true, but also, if you go look at their product catalog, you will see none of their direct attach cables are longer than 5m, and the high-bandwidth ones are 2m. So, again, it's true, but also limiting in other ways.
Hollow-core microstructured optical fibers (HC-MOFs) promise propagation speeds nearly that of light in vacuum.
https://pmc.ncbi.nlm.nih.gov/articles/PMC11548225/
Now do silver.