Immigrants are a first target of rapidly expanding government surveillance, EFF's Matthew Guariglia told Truthout, but "if they can get away with doing these things to immigrants, what are the legal barriers from moving on to the next undesirable group?"
https://truthout.org/articles/trump-is-rapidly-expanding-the-surveillance-state-as-protests-grow/
@eff @theruran @50htz Here you go dear friends. It was obvious this day, this situation, would come.
Now.
In my humble opinion, the best path to resistance in the current cyberspace architecture is PUFs (physically unclonable functions). Did you know that several kinds of PUF can be instantiated, with several very different techniques, in all FPGAs? This allows securely authenticating an FPGA, its PUF, and its configuration, almost as if we had end-user-verifiable free integrated-circuit SoCs with PUFs.
The only thing I have to check is whether a PUF instantiated in an FPGA can also validate (sign/hash) the running bitfile. If we can do it, provably safely (and I think this should be the case), it goes really far in terms of consequences: we could create private, authenticated mesh networks of SoCs whose content can be trusted remotely.
This is enough for the comeback and full rebirth of truly autonomous P2P networks, safe from cloud attacks.
@eff @theruran @50htz Think about the consequences if such an FPGA SoC with PUF authentication uses a Hardstack module for external connectivity.
Finally, after all, the situation is not as bad as we thought.
But this is true only if a PUF implemented in an FPGA can also "validate remotely" the running bitfile.
@eff @theruran @50htz cc The small thread above @kkarhan @vidak @forthy42 A simple Spartan-6 can reuse, as standard IOBs, the SPI pins used to load the bitfile from an external EEPROM at FPGA boot time. This allows the design implemented in the bitfile to access the bitfile again and perform computations on itself, by implementing another SPI controller, computing a hash, etc.
@eff @theruran @50htz @kkarhan @vidak @forthy42
An FPGA's running bitfile can perform computations on itself. If those computations (including the bitfile's hash and hash-signature validation) also use PUF capabilities to derive a unique signing key, then we have a running bitfile that is remotely authenticatable.
Isn't life great when crypto-anarchists' motivation goes wild?
Hardstack usage brings even more on top of this.
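A minimal Python sketch of that remote-authentication idea, assuming the design can read its own bitfile back (as described above) and that a stable key can be derived from the PUF response. A real PUF needs a fuzzy extractor / error correction for that step; here the response is just a hard-coded stand-in, and all names are illustrative:

```python
import hashlib
import hmac
import os

def derive_key(puf_response: bytes, salt: bytes) -> bytes:
    # Stretch the (error-corrected) PUF response into a signing key.
    return hashlib.pbkdf2_hmac("sha256", puf_response, salt, 100_000)

def attest_bitfile(key: bytes, nonce: bytes, bitfile: bytes) -> bytes:
    # Tag = HMAC(key, nonce || bitfile); the verifier picks the nonce,
    # so an old recording of a tag cannot be replayed.
    return hmac.new(key, nonce + bitfile, hashlib.sha256).digest()

# Enrollment (done once, while we physically hold the board):
puf_response = b"\x3a" * 32            # stand-in for the silicon-unique response
key = derive_key(puf_response, b"node-01")

# One remote attestation round:
nonce = os.urandom(16)                 # fresh challenge from the verifier
bitfile = b"...bitstream read back over SPI..."
tag = attest_bitfile(key, nonce, bitfile)

# Verifier side: recompute and compare in constant time.
ok = hmac.compare_digest(tag, attest_bitfile(key, nonce, bitfile))
```

The nonce is what makes this a liveness check rather than a stored certificate: the node must recompute the tag over its actual bitfile at challenge time.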
@kkarhan See Kevin, most folks don't get that only when hardware is authenticated and validated against the original source code does true free software (end-user verifiable) give all its powers back to its users and finally become trustworthy, unleashing its full power of transparency for real. @eff @theruran @50htz @vidak @forthy42
@kkarhan Indeed, the kinds of applications targeted are either an assembler-written P2P node, voluntarily avoiding compilers for more security and strict end-user verification, or, typically, a Linux build for the kind of nodes you know, making Tor obsolete.
For the first kind of application, many hackers would feel at peace if it were the 68k ISA.
@kkarhan And for the second kind of nodes, it doesn't really matter, and I think it would be a simple version of RISC-V.
Reinventing a new ISA would be counterproductive and lost time. We can't afford to lose time any longer. The fascists and imperialists can't win.
@kkarhan Do you know why nobody implemented this before, Kevin? Because everybody in the West is under CIA/NSA/MOSSAD mind-control weaponry, and researching or developing such things is not on the "white list" of thoughts authorized by those Nazi imperialist pigs.
@stman @theruran @50htz @vidak @forthy42 OFC I don't expect you to reinvent the wheel!
- Having a minimalist Linux distro supported is a kinda necessary "evil", as in *"Software makes Operating Systems, Operating Systems make Hardware"*, and if we want this to succeed we need to ease developers into adopting it.
It's not that I dismiss your developments, but having a way for anyone to follow a guide and get a "blinky cursor on screen" really does deliver the dopamine that makes them want to dive deeper...
@kkarhan A last word about the covert fascism of these three-letter agencies and all their Western allies: fascism, including "German Nazism", could happen only because people had been brainwashed in a way that reduced their ability to think freely, so that whatever they thought after the brainwashing would lead them to agree with Nazism, or to think like the Nazis.
These three letters & friends did the same with digital technologies & the internet, as cyber-religions.
And now a last word on the topic of FPGA PUF SoCs: actually, the only "bad thing" to take care of is FPGA backdoors, which allow live remote modification of the FPGA's running configuration. But exploiting this requires fast side channels or hidden channels. Hardstack will mitigate most of the TCP/IP-based ones, at least all the fast, known ones, so using these hidden channels to exploit FPGA backdoors becomes less practical, if not impossible.
Then we know plenty of tricks to mitigate such backdoor usage even better. Here I'm talking about simple tricks like resetting and reloading the FPGA bitfile very regularly, like every hour.
This strategy is good as long as we also securely block all fast hidden channels, thanks to modules like Hardstack: an attacker controlling those backdoors then doesn't have enough time, over slow hidden channels, to modify the running FPGA configuration in the time lapse between two FPGA resets and bitfile reloads.
Blocking fast side/hidden channels plus reloading the FPGA bitfile regularly over time is really a very strong combination that prevents most FPGA backdoor usage.
If you add a third trick, reloading a different bitfile each time (same code but compiled with a different seed/salt), then we can consider such backdoor usage fully mitigated.
But the last trick is harder to implement, and even with the first two we already reach a very low (and acceptable) probability that the attacker can use their FPGA backdoors.
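A back-of-envelope check of that reload argument (all numbers are made up for illustration): the attack only succeeds if the whole malicious payload fits through the remaining covert channel within one reload window.

```python
def attack_feasible(payload_bits: int, channel_bps: float, reload_s: float) -> bool:
    # The attacker must push the entire payload through the covert channel
    # between two reloads; otherwise the partial modification is wiped.
    return payload_bits / channel_bps <= reload_s

# Hypothetical: a 100 kbit reconfiguration payload over a 10 bit/s
# residual covert channel, with an hourly reload.
slow_channel_wins = attack_feasible(100_000, 10, 3600)        # needs ~2.8 h
# An unblocked fast channel (1 Mbit/s) would defeat hourly reloads:
fast_channel_wins = attack_feasible(100_000, 1_000_000, 3600)
```

This is why blocking the fast channels and reloading regularly only work as a pair: either measure alone leaves the inequality satisfiable.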
As a conclusion, we can say the FPGA PUF+SoC design here gives us almost true equivalence to end-user-verifiable free integrated-circuit PUF SoCs.
We thought we were stuck, because of the unavailability of mass-production and verification ecosystems for end-user-verifiable free integrated circuits, like those I wanted to present at FSiC 2024 (a talk censored by those motherfucking three letters), but actually we found a workaround, a good one, that requires much less work than our original plan.
We've been lucky.
Now we must take advantage of that. And fast, way before we're all sent to the gas chambers.
I mean, flatlined like McAfee.
This will be a good revenge for the covert murder (nobody can have any doubt McAfee was murdered) of the escaped slave McAfee. He must be laughing, from wherever he is, if he can see our determination and the results we get.
We still manage to think creatively, even completely fucked, blocked, and slowed down by the psychotronic weaponry & DEWs heavily used on all of us to FUCK US.
By the way Kevin, the FPGA PUF+SoC approach also works to validate any custom hardcoded engine made with the FPGA (peripherals, whatever...). It's actually the "Remote IC Trust" functionality we have been dreaming about since my prior proposal of "end-user-verifiable free integrated circuits", but this time made out of a non-free FPGA.
Remote IC Trust with PUF is clearly the next big Crypto-Anarchist move.
And we can't be stopped.
YES Kevin.
• An FPGA-made SoC with a PUF means remote nodes' SoCs can be trusted by a given node to conform to given hardware and software code (useful to ensure identical implementation of both hardware and software code between the nodes of a private P2P network).
• Using standalone external (hardcoded in TTL) or internal (hardcoded in FPGA fabric) modules like #Hardstack makes each node way more secure, preventing known hidden/side channels over TCP/IP and undesired outgoing or incoming connections from bad actors.
• Ultimately, using a PUF protects such P2P networks against cloud attacks: existing legit nodes CANNOT be cloned, duplicated, or spoofed into a cloud, and millions of fake nodes cannot be
@kkarhan instantiated in a snap in a cloud (which is how the CIA pwned at least Bitcoin).
PUF is the best cryptographic primitive to fight back against the bastards. But a PUF is exclusively a hardware cryptographic primitive. It cannot be implemented safely in software. It's a hardware thing.
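A toy model of why cloning fails (all names and the keyed-hash "PUF" are illustrative stand-ins, not a real PUF API): the verifier records challenge-response pairs at enrollment and spends each one exactly once.

```python
import hashlib
import secrets

class GenuineNode:
    # Stand-in for silicon: responses derive from a device-unique secret
    # that, unlike this variable, could not be copied out of a real PUF.
    def __init__(self, secret: bytes):
        self._secret = secret
    def respond(self, challenge: bytes) -> bytes:
        return hashlib.sha256(self._secret + challenge).digest()

class Verifier:
    # Holds challenge-response pairs recorded at enrollment; each pair
    # is used once, so replaying old traffic teaches a clone nothing.
    def __init__(self, crps: dict):
        self._crps = dict(crps)
    def check(self, node) -> bool:
        challenge, expected = self._crps.popitem()
        return node.respond(challenge) == expected

node = GenuineNode(secrets.token_bytes(32))
# Enrollment, done while we physically hold the board:
crps = {c: node.respond(c) for c in (secrets.token_bytes(16) for _ in range(3))}
verifier = Verifier(crps)

genuine_ok = verifier.check(node)                                 # passes
clone_ok = verifier.check(GenuineNode(secrets.token_bytes(32)))   # fails
```

The one-time-use discipline is the anti-cloud property: a fake node farm cannot answer challenges it has never seen, no matter how many instances it spins up.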
The CIA & friends, or competitors, are celebrating every single day that goes by without us moving our asses
@kkarhan to develop a new generation of P2P networks EXCLUSIVELY based on the principles described above.
They drink champagne every day.
They must even be betting internally on whether we will goddamn dare to do it or not!
We can fight back, if we simply want to.
WAKE UP FRIENDS !
Let's move our asses !
We can kick their balls to mars.
Let's just do it !
@kkarhan @theruran @50htz @vidak @forthy42
• One simple low-cost FPGA board.
• One simple single .bin file to download, check its hash and/or signature, and flash into an EEPROM, EPROM, or ROM (the board can come with it already flashed).
• Find other bros nearby who bought it too and are using it.
• Meet them IRL at a meetup to exchange PUF challenges to include new user nodes in the network.
Then have fun !
A web of trust has limited drawbacks. But with mutual PUF-based verification of each node's running hardware and software, it's way safer. If a node plays bad, it can be excluded easily, definitively, and safely. If the mesh network includes a topology broadcast to all nodes, we can easily identify the folks responsible for letting bad nodes come in too often.
A true collective maintenance of the authorized nodes is possible.
Until now, without PUFs, true mutual remote checking of ICs and their running software was not possible. But with this new scheme it is possible, and it changes things deeply.
Am I talking into the void ?
What's going on friends ?
Where is that joy you should feel ?
Are you happy ?
@kkarhan @theruran @50htz @vidak @forthy42
If we don't dedicate more time, collectively, to moving forward with the different developments and projects, sharing the workload cleverly between all of us, we are running toward disaster.
We should first agree on which project to focus on. I thought developing a general-purpose secure P2P node, usable for any P2P app, would find consensus among us, but we need to validate this clearly so that everybody can get engaged in its development.
@kkarhan @theruran @50htz @vidak @forthy42
It appears to me that we should slow down on our fundamental research (which is at stake anyway, so it won't change much) and focus more on a real, "usable now" project.
@50htz Chris, have you finished installing the Xilinx FPGA dev environment you got with your FPGA dev card? Do you need my help to make all this operational? It's easy for me to guide you; I'm well used to these ISE tools from Xilinx.
@kkarhan @theruran @50htz @vidak @forthy42
So currently I'm still developing Hardstack, but Hardstack is not enough; we need the rest of the hardware to create a usable, general-purpose, super-secure P2P node out of an FPGA, with a PUF, if you agree to this project. We should discuss this, and what such work will lead us to once it's finished (discuss the first P2P apps we would like to implement with it).
Please answer; make your voices and opinions heard. I want to know how you see things.
@kkarhan @theruran @50htz @vidak @forthy42
Basically, we need to develop a well-chosen PUF in a low-cost FPGA, then develop the functions that will allow remote and local authentication and integrity checking of everything implemented in hardware in the FPGA (which can include a CPU and some peripherals), but also of the software, if any, running on such a soft CPU made out of the FPGA. The verification logic would be agnostic to whether something is hardware or software, as it
@kkarhan @theruran @50htz @vidak @forthy42
will all be included in the FPGA bitfile, so this means we only have one binary blob to check against an original serving as reference.
The approach of securing a system with a PUF has a constraint, which is also its best quality: everything must fit into a single standalone IC, the FPGA (the CPU, the peripherals, the custom hardcoded business logic if any, the PUF, these integrity-checking functions, and the software executable code).
@kkarhan @theruran @50htz @vidak @forthy42
This creates limitations according to the FPGA's capabilities, available RAM, etc.
But the reward is a design whose integrity can be checked remotely, and this is essential in the kind of P2P networks we are targeting. Everything implemented in the FPGA can benefit from the remote integrity-check mechanism. Everything external to the FPGA can't, because those external things would need to implement some other PUF.
@kkarhan @theruran @50htz @vidak @forthy42
A PUF can only validate the single IC in which it is implemented.
That's the limitation. But it's not a bad one. It's our strength. And it will also force the "business logic" of a P2P node to be coded with limited resources in terms of code size and available RAM, which almost naturally forces the P2P app logic to be developed in assembler, or with a light C compiler and only a few basic libraries. Which is a good thing, actually, in terms of security and
@kkarhan @theruran @50htz @vidak @forthy42
review by peers: the "put it all in the FPGA" constraint is actually a chance; it will force perfectly coded apps, the good old way. It naturally forces us to reduce complexity as much as possible, and to avoid all the problems of Linux development with too many dependencies on libraries etc., which make full review impossible if all those dependencies are taken into consideration.
But I'd like to hear your points of view or takes on all this.
@kkarhan @theruran @50htz @vidak @forthy42
Chris and Vidak wanted our group to work more on operational stuff, permacomputing way of doing things.
Here we are. Due to the political situation, we are actually forced to follow this path, and consider it an emergency.
I hope Vidak can rally many other motivated people to this thread. We need a few more hands and brains.
If we don't do it now, we will never do it, or it will be too late. It's our moral responsibility to act now.
@vidak Yes Comrade.
Look, I don't have a shit job to work at for the moment, and I'm still stuck at my parents' place until my father has fully recovered (which is almost the case, 90% at least). So I have some free time. I use this time to work on my RF pulse detector (90% of my free time) and on Hardstack (10%).
I can't really take on more "alone". I'm already doing a lot. I could still work with others on other topics, but NOT ALONE.
@vidak I put the priority on the RF pulse detector and analyzer because I can't continue living while being tortured daily by anonymous Nazi bastards. I need to know exactly who the fuck is doing this to me, and I'm not the only victim. Many other victims are counting on me to finish this goddamned detector ASAP.
Still, the skills acquired developing it will be very useful in the future for all my digital designs too, like versions of Hardstack running at 100 Mbps.
I've been learning a lot, really a lot, in HF/RF analog electronics. I had poor skills, and now I really feel much more fluent.
This could also be reused to develop a fast version of "Ronja", you know, the opto-electronic modem (point-to-point, 2 km, optical, like a laser link but actually with an LED and a simple optic)... Do you remember Ronjas? I presented them to you in the past, I think (I hope).
Ronjas are typical permacomputing toys. But they need to be pushed up to 100 Mbps.
And at 100 Mbps, Ethernet is completely different: it is no longer simple digital Manchester encoding on the cable; it's a transmission based on "symbols" encoding several bits (5-bit code groups in the case of 100BASE-TX). And here there are two approaches: ADC sampling, at 200 Msps at least, or performing flash ADC, using high-speed comparators to directly determine a symbol,
and in this regard, my work on my RF pulse detector will have been a super training class for flash ADC, as I am doing flash ADC. A flash ADC doesn't use a classical ADC circuit; it uses another technique, with fast comparators.
It's way simpler and much less costly than a classical ADC. It's really more "permacomputing friendly", as it doesn't use costly, complex components like a true ADC IC. It can be made out of discrete transistors. Which is a good point.
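The comparator idea in miniature (threshold voltages here are arbitrary example values): two comparators are enough to slice a three-level line signal, which is exactly what a tiny flash ADC does.

```python
def slice_level(sample_v: float, v_hi: float = 0.5, v_lo: float = -0.5) -> int:
    # One comparator trips above the +threshold, the other below the
    # -threshold; anything in between reads as the middle level.
    if sample_v > v_hi:
        return +1
    if sample_v < v_lo:
        return -1
    return 0

# A few sample voltages sliced into the three symbol levels:
levels = [slice_level(v) for v in (0.9, 0.1, -0.7, -0.2, 0.8)]
```

In hardware each `if` is just a fast comparator against a reference voltage; no sampling clock faster than the symbol rate, no DSP.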
@vidak @50htz @kkarhan @theruran @forthy42
Finishing my point on Ronjas at 100 Mbps and flash ADC: if I remember well, those 100BASE-TX symbols are something like this: each symbol always has the same duration, the symbol duration is divided into two sub-periods, and for each sub-period the expected value of the signal can be +1, 0, or -1.
Examples :
Symbol #1 : +1, then 0
@vidak @50htz @kkarhan @theruran @forthy42
Symbol #2: -1, then +1
Symbol #3: -1, then 0
These were just examples to illustrate that for a 100 Mbps Ronja implemented in a permacomputing way, it is stupid to do full ADC sampling to recognize symbols, because it is complex, while using three fast comparators can easily recognize the +1, 0, or -1 levels.
Then, for the transmitting Ronja, we can use two different optical channels:
@vidak @50htz @kkarhan @theruran @forthy42
• A red LED for channel #1
• A blue LED for channel #2
Then the "+1" Ethernet symbol level is emitted directly on the RED LED, and the "-1" level on the BLUE LED. At the "0" level, both LEDs are off.
This spectral multiplexing is very easy to do this way, with flash-ADC techniques and actually just two fast comparators to detect
@vidak @50htz @kkarhan @theruran @forthy42
"+1" and "-1" Ethernet symbol levels, "0" being when neither the "+1" comparator nor the "-1" comparator detects its respective level.
The spectral optical multiplexing of the Ethernet signal, using two different colors and two distinct channels emitting in parallel, allows transferring the three possible levels of each symbol's two sub-periods.
It's simple.
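The two-color mapping can be written out directly (a sketch of the scheme above, not tested hardware): +1 drives the red LED, -1 the blue one, 0 leaves both dark, and the filtered phototransistors invert the mapping on the receiving side.

```python
def level_to_leds(level: int) -> tuple:
    # Transmitter: returns (red_on, blue_on) for one symbol level.
    return (level == +1, level == -1)

def leds_to_level(red_seen: bool, blue_seen: bool) -> int:
    # Receiver: one red-filtered and one blue-filtered phototransistor.
    if red_seen and not blue_seen:
        return +1
    if blue_seen and not red_seen:
        return -1
    return 0

# Round-trip through the optical channel model:
round_trip = [leds_to_level(*level_to_leds(lvl)) for lvl in (+1, 0, -1)]
```

Both functions are pure combinational logic, which is the whole point: they map onto a handful of gates, no processor required.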
@vidak @50htz @kkarhan @theruran @forthy42
Receiving is even simpler: we just need to place optical color filters, one blue, one red, in front of the two phototransistors receiving the dual-color LED beam of the remote Ronja.
We don't even need an FPGA for such an implementation, nor do we need a CPU to perform signal processing on the byte stream of an ADC sampling the Ethernet signal at 200 Msps or more.
@vidak @50htz @kkarhan @theruran @forthy42
Conclusion: we go from a Ronja at 100 Mbps made with an FPGA, CPU, and fast 8-bit ADC, costing like 200 euros, to the flash-ADC implementation I detailed, doing the equivalent of the FPGA + CPU + fast 8-bit ADC but for 20€, with a NULL attack surface on the Ronja itself because it is processor-less and software-less.
What I described is really the permacomputing way of doing things: KISS.
@vidak @50htz @kkarhan @theruran @forthy42
Just a year ago, I didn't know how to create a 100 Mbps Ronja transmitter for less than 200€; it meant medium complexity with an FPGA, CPU, and 8-bit ADC.
Today, I know how to develop it simply, processor-, FPGA-, and ADC-less, for just 20€, with a very simple flash-ADC implementation.
The drop in price makes it very desirable. We can do it.
@stman as I suggested in the past, it would be feasible to easily get 10x (if not beyond) by frequency-multiplexing (aka using different wavelengths) said RONJA units, alongside narrow bandpass filters (or narrowband photodetectors) for each wavelength...
- Basically doing passive CWDM at home...
https://en.wikipedia.org/wiki/Wavelength-division_multiplexing#Coarse_WDM
@kkarhan Yes.
Using four channels would allow reducing the necessary bandwidth even more, both for the LEDs and for the phototransistors. The only question is: is it possible to reproducibly find good-enough color optical filters, available everywhere at low cost? I'm convinced it is, but one of us would just need to focus on this point and test several candidates.
@kkarhan The grace of the original 10 Mbps Ronja is that it was fully reproducible with low-cost, highly available discrete components worldwide.
I now think the same challenge can be met for an upgraded version operating at 100 Mbps, and still operational up to a maximum distance of at least 2 km, like the original #ronja.
I love the fact it can be safe because
@stman @vidak @50htz @theruran @forthy42 granted, this isn't rocket science, as both high-quality, rugged (industrial temperature range), long-life LEDs and colour filters for input and output that are colour-stable yet affordable with low light loss exist on the market.
If we were to look at COTS parts, we surely should look at LEDs and lasers in common wavelengths. These are the ones one can find on the market:
940 nm (medium-IR, modern NV)
850 nm (near-IR, old NV)
780 nm (near-IR; CDs)
670 nm (red, HiVision Laserdisc)
650 nm (red, DVDs)
638 nm (orange)
532 nm (green)
510 nm (bright-green)
488 nm (blue)
405 nm (blue-violet, HD-DVD & BDs)
385 nm (violet, *"black light"*)
365 nm (UV, document verificators)
Assuming each of those gets only monomode use, and we're going with 10/12 error correction, that means we get 10x the available spectrum and 10x the bandwidth of the RONJA at the same baud rate.
OFC I'd assume for cost reasons it would rather be an "RGB+" setup, with a potential "multimode" setup to reduce parts costs whilst achieving the same throughput at a lower baud rate.
940 nm (IR)
650 nm (red)
532 nm (green)
405 nm (blue-violet)
365 nm (UV)
In the end, component costs and limitations are what limit "visible light PtP link" performance...
- OFC getting one of those cheap 500mm/800mm f/8 optics and 2x teleconverters to zoom onto the other end, as well as recessing it into a case (similar to a CCTV camera or rather a telescope), could also yield better results and reduce issues with light bleed and disruptions, even when installed at suboptimal angles with "noise sources" (ambient light pollution, sun) disturbing.
In the end this could even adapt the link width automatically and even renegotiate it with any switch if done properly, as downgrading to 10 Mbit/s is preferable to total link loss...
In fact, companies like TESAT already do this for their optical inter-satellite link systems, delivering up to 100 Gbit/s over 80,000 km at 340 W power draw and 34 kg weight.
So I'm sure 1/100th in terms of speed and power consumption, 1/10,000th of the length, and 1/4 of the weight is achievable with COTS parts.
@stman @vidak @50htz @theruran @forthy42 I know: they can't be turned on/off at indefinite speeds, nor can photodetectors detect at infinite sample rates.
- That's why one has to, OFC, choose components carefully and consider both "wavelength multiplexing" as well as multimode strategies...
AFAICT, #RONJA uses a single brake-light LED and photodetector, so it's both monomode and without any WDM at all...
- In theory one could also design it to be a more scalable system, with very precise aiming for 1:1 relations (using 2000mm-equivalent optics to aim right at the senders)...
This would be more finicky to install, but on most occasions it should be on par with Ku-band, at worst Ka-band, satellite dishes...
@kkarhan Yes.
If the 100 Mbps #ronja we could develop manages to keep a simple design, with discrete components and simple logic gates, that is important, because it means a microprocessor-less implementation of the modem, which is a good point if we can keep it that way, in terms of resilience, cybersecurity, and a NULL software and hardware attack surface. The modem shall stay a modem. FEC and the like are almost out of scope, because they involve complex DSP.
@stman @vidak @50htz @theruran @forthy42 in that case I'd propose coupling multiple wavelength-pair units together and basically doing "space-division multiplexing" by having the endpoints aim exactly at each other...
Add to that different wavelengths and you can get even more bandwidth at the expense of a bulkier unit, but with multiple units that can then aggregate bandwidth.
I.e. having a switch that does LACP and can then on-the-fly add/remove/enable/disable single RONJA links would also work.
At worst one can do RGB lasers + optical beam splitters at the receiving end with bandpass/colour filters, combine such a "tri-wave unit", and put them into clusters of like 6, which under optimum circumstances would yield 18x the bandwidth without necessitating a higher baud rate.
- OFC we have to assume these are optimal conditions and thus never feasible, so reducing the baud rate by 40% would still yield 108 Mbit/s (also known as "Super-G" / 802.11g speed), allowing for an 8% error margin.
Said system could also use some of the bandwidth to auto-negotiate link rates and adapt them on the fly, reducing operators' workload and automatically downgrading in case of excessive rain and fog rather than losing connectivity.
- With the added benefit of better serviceability, should a unit get damaged or smudged or something...
@kkarhan Yep.
This is the idea.
Let me give you the list of LEDs (and their respective inner technology) and phototransistors (or equivalent technology) that can switch very fast, at least above 100 MHz (search done by an AI); there are actually not a lot:
(And we shall check the prices, to get an order of magnitude.)
In such a Ronja, those LEDs and photodetectors would actually be the only "critical" components, with not many possible substitutes.
@kkarhan Actually, the search done by ChatGPT above is very incomplete. There are many more options available. The issue is a junction capacitance that is too high for high-frequency CW; otherwise you need to improve the driver to compensate for the capacitance with a pulse having a "peak and sustain" wave shape.
I'm reading a paper explaining how to do that and increase many LEDs' max CW bandwidth just by improving the LED driver.
Polarization is a good idea, as polaroid filters are inexpensive.
There is nothing impossible here; we just need to create configurable drivers and test different LEDs with them. Some folks also use another trick, consisting of pre-driving the LED with a small current to keep it just under the beginning of its linear
@stman @vidak @50htz @theruran @forthy42 Didn't know LEDs could be driven at such high speed, but then again, given RONJAs run with a single powerful LED used as a brake light on motorcycles, it sure as hell needs to be a monomode, single-wavelength system.
- They probably even go so far as to have logical 0 not correspond to an electrical 0V/0lm/0cd output, but just to a 'low' state versus a 'high' state (i.e. flipping between 5V & 12V, or 3.3V & 5V), because that has a lower rise time (and may also be less straining for the LED, given that its lifetime is measured in hours and power cycles, avoiding power-offs).
This also helps attenuate signals whilst setting up, and allows on-the-fly re-attenuation to deal with weather, as well as detecting mere signal degradation (and thus communicating it to the other side) versus loss of line of sight
...
@stman @vidak @50htz @theruran @forthy42 I'd still recommend at least considering integrating more wavelengths (even if it's just green and blue next to red), as green and blue have better transmissibility and are less likely to get attenuated by humidity and rain.
- Plus you'd have more wiggle room re: symbol rate, and you could potentially make this a scalable option, as in "replaceable modules" that allow in-the-field reconfiguration and potentially even DIY fiber connections using COTS lasers and cheap TOSLINK cables.
@stman @vidak @50htz @theruran @forthy42 as for the "protocol", I'd recommend taking a closer look at #IrDA, since there are a shitton of cheap transceivers and implementations available, and it would allow this to be reused as a secure P2P data-exchange and secure contactless-networking solution that is inherently harder (if not basically impossible) to eavesdrop on compared to #Bluetooth and #WiFi.
- Plus it'll be a potentially better alternative to #LiFi for #Broadcast usage.
Not to mention I've not seen any IrDA devices >4 Mbit/s in the wild, with most being 9600 bit/s #serial links for the most part; and having a direct optical data exchange that takes literal seconds instead of minutes would really have a lot of good use cases, e.g. in medical fields, where having a fully sterilizable computer is kinda important, and having a docking cradle with a charging coil and an optical "port" would really be appreciated.
In fact ITU G.9991 (G.vlc) & IEEE 802.11bb are worth looking into as well just to see advantages and disadvantages...
https://en.wikipedia.org/wiki/IrDA
https://en.wikipedia.org/wiki/Li-Fi
@kkarhan Sure, we can find some interesting tricks in IrDA, but don't lose sight of the fact that IrDA is for short distances, while for #ronja we want to achieve at least 2 km, meaning we use powerful LEDs in comparison with IrDA.
---
Here is a link to my Mega account to download the FULL 802.3-2018 standard with all sections. This file is normally hard to find, and it costs a lot if you want to buy it.
https://mega.nz/file/UscgELSJ#v7EZZB-pXz7Eijob72ilMUsc49J93WPREnR2KvDTtEk
I will refer pages in it later on...
@stman @vidak @50htz @theruran @forthy42 BTW: 930-950 nm IR can also help detect the water vapour, fog, and rain that may attenuate the signal, so it's probably worth using that wavelength as well, to allow adaptive correction of the baud rate to account for weather-based signal degradation...
- It could also be used as a pilot signal to fine-adjust the link...
Siemens claimed 500 Mbit/s with visible-light LEDs...
@stman @vidak @50htz @theruran @forthy42 I know.
IrDA is also a simplex protocol, not full duplex.
Still, a lot of the design lessons (especially NRZI encoding) are kinda vital.
And whilst IrDA was designed for short-distance device-to-device links, nothing in the protocol says that, given sufficient SNR and the ability to receive, you can't use it over longer distances.
- Basically leveraging it as a means of "auto-negotiation" and of transmitting useful configuration, like pubkeys, and facilitating the key exchange for transparent data encryption between the two points (if not pairing them that way initially, before installing them)...
Including like basic "link acquisition" to allow for faster and easier setup of said links.
- Also you can leverage the protocol for slow, albeit ultra-long-range links (i.e. using UV or blue visible light)...
@stman @vidak @50htz @theruran @forthy42 granted, I don't expect you to implement that stuff; merely see it as reference material to cross-check ideas with, because they may already have tried something and even written down why something is to be preferred or avoided at all costs.
- Basically see it as a sort of "grass touchpoint" to compare against.
Obviously you aim for something simpler and more effective, akin to an optical version of "RS-485 on speed", where you can easily put some simple FPGA doing DSP work and shoving 10/100M Ethernet through a NIC.
Normally, as far as I can remember (because goddammit, I've been digging into the 802.3 standard for 30 minutes to find the fucking 100BASE-TX encoding on the twisted pairs, where those symbols are defined, and I can't find them yet; it's just crazy), 100BASE-TX Ethernet is already NRZI encoded.
Okay, got them.
The shit starts at page 185 (chapter 24.2.2.1), where the 4B/5B encoding is detailed; then we should have the corresponding waveforms defined for each symbol.
It took me 30 minutes to find it. 30 minutes. And understand that I had already located it in the past; but you see, the standard is so big that even those like me who have already spent days reading it are still lost. God damn.
God damn thing.
@stman @vidak @50htz @theruran @forthy42 sorry, didn't have that in the back of my head...
Either way, I think a visible-light wavelength multiplex, if not a "multimode" or spatial multiplex, will allow you to easily >3x the bandwidth just with RGB LEDs alone, no change in symbol rate required.
Same principle as with fiber Ethernet: if you can't increase the bandwidth per frequency, just add another one.
@stman @vidak @50htz @theruran @forthy42 4b/5b could easily allow for a simple FDM/Multimode/code thingy.
After all, it's literally just 2⁴ = 16 combinations, with the 5th bit being basically RZI'd 4/5 FEC...
Assuming a use of 5 wavelengths, that means one only needs to double the symbol rate to get the 10x speed gain for 100 Mbit/s over 10 Mbit/s.
@kkarhan No, no, it's not that.
It has nothing to do with FEC and NRZI; it's just that each byte is transmitted as two nibbles of 4 bits, but each nibble is first converted to a 5-bit symbol. Some symbols are used to delimit the start and end of the frame... etc...
But I'd like to see the MLT-3 encoding defined for each of the 2⁵ = 32 symbols. I can't find it yet.
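For reference, the 4B/5B nibble-to-code-group mapping from 802.3 clause 24 can be sketched like this. The table values are the commonly published ones; the low-nibble-first ordering per byte is an assumption here, to be double-checked against the standard:

```python
# 4B/5B data code groups from IEEE 802.3 clause 24 (100BASE-X PCS):
# each 4-bit nibble maps to a 5-bit code group chosen so the serial
# stream never goes too long without a transition.
CODE_4B5B = {
    0x0: "11110", 0x1: "01001", 0x2: "10100", 0x3: "10101",
    0x4: "01010", 0x5: "01011", 0x6: "01110", 0x7: "01111",
    0x8: "10010", 0x9: "10011", 0xA: "10110", 0xB: "10111",
    0xC: "11010", 0xD: "11011", 0xE: "11100", 0xF: "11101",
}

def encode_byte(b: int) -> str:
    """Encode one byte as two 5-bit code groups (low nibble first,
    an assumption for illustration)."""
    return CODE_4B5B[b & 0xF] + CODE_4B5B[b >> 4]

# One byte (8 bits) becomes 10 line bits, which is why 100 Mbit/s
# of data needs a 125 Mbaud line rate.
```

So `encode_byte(0x00)` yields `"1111011110"`: ten line bits for eight data bits, a 5/4 expansion.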
God damned fucking shitty standard with militarized complexity.
Okay found it :
- Encoder : Chapter 25.5.1
- Decoder : Chapter 25.5.2
But the bastards only indicate changes relative to another chapter: the details of MLT-3 encoding (which is a 3-level variant of NRZI encoding) are defined in chapters 7.1.2 and 7.2.2, which shall stand as the normative reference for MLT-3 encoding/decoding for 100BASE-T, plus the additions/changes found in chapters 25.5.1 and 25.5.2.
Pfffff.
Going to chapters 7.2.1 and 7.2.2 then...
Well, I couldn't find it, it refers to other standards.
But I did "google" MLT-3 and got this image :
So I can confirm we have a 3 level encoding scheme.
I wanted to use 2 channels, to only transfer "+1" and "-1", implicitly transferring "0" when both channels are at zero.
@kkarhan Yes, sure. But a 100 Mbps Ronja must stay as KISS as possible. I want to avoid CPU/FPGA and ADC/DAC if possible. And converting an MLT-3 encoded signal to two optical channels with LEDs of different colors can be done simply, without CPU/FPGA/ADC+DAC and digital signal processing, if we implement the simple flash-ADC techniques I described yesterday.
I'd rather spend time selecting good LEDs, creating an ad-hoc driver to boost their bandwidth, and using flash-ADC techniques to detect the 3 possible levels of an MLT-3 signal with just simple discrete logic and fast comparators, where no CPU signal processing is required, than use a CPU with DAC/ADC.
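To make the two-channel idea concrete, here is a minimal behavioural model of MLT-3 plus the +1/-1 channel split. This is a sketch only; the real receiver would be comparators and discrete logic as described above:

```python
def mlt3_encode(bits):
    """MLT-3: a '1' moves the line to the next level in the cycle
    0 -> +1 -> 0 -> -1 -> 0 ...; a '0' holds the current level."""
    cycle = [0, +1, 0, -1]
    idx = 0
    out = []
    for b in bits:
        if b:
            idx = (idx + 1) % 4
        out.append(cycle[idx])
    return out

def split_two_channels(levels):
    """Map the three levels onto two on/off optical channels:
    channel A lights up for +1, channel B for -1; both dark = 0."""
    return [(lv == 1, lv == -1) for lv in levels]
```

For example, `mlt3_encode([1, 1, 1, 1])` gives `[+1, 0, -1, 0]`: four '1' bits complete one full cycle, which is where the bandwidth reduction comes from.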
Simplicity is always better.
@kkarhan This means implementing a true optical modem, with no CPU and no software.
Even the NSA or CIA or MOSSAD couldn't "hack" these modems, because they would have a provably NULL software and hardware attack surface.
It's no coincidence anarchists choose to implement #ronja 10 Mbps this way.
We want true independence & freedom.
We have no free integrated circuits yet, so
when we have the opportunity to create devices as strong as if we had end-user-verifiable free integrated circuits, we should really take advantage of it, even if it means fewer functionalities (typically no FEC, in this specific case).
Achieving true & proven full security this simply is rare; we shall not waste it.
But have a look at what I could find in a publication about MLT-3 encoding for 100BASE-T ethernet, this is very good news :
Most high power LEDs can modulate up to 50 MHz. Maybe using two channels to transmit separately "+V" and "-V" levels is not necessary.
Look at my remark highlighted in Red on this pic below :
@kkarhan @vidak @50htz @theruran @forthy42
Actually, all this will depend on whether we choose digital CW modulation (digital 1 or 0) on two optical channels, one for the "+V" and one for the "-V" levels, or whether we choose to emit the line signal in an analog way, using a single channel, knowing that in the latter case the signal frequency is only around 32 MHz, with most LEDs able to handle up to 50 MHz of bandwidth.
The 2 emission methods (fully digital with 2 channels vs. analog with 1 channel) must be tested in the field.
@stman @vidak @50htz @theruran @forthy42 agreed.
Personally, I'd recommend a tri-band approach (605nm Red, 523nm Green, 405nm Blue) and basically treat them as independent carriers, so you can get away with a lower symbol rate & baud rate or have more margin.
- Remember: The optical Line-of-Sight link may not always have ideal conditions and face anything from fog to hailstorms and from clear skies to sunshine glaring into the sensor with rain diffracting it.
I wish I knew a source for eye-safe laser pointers and matching photo detectors & colour filters cuz that would really make it easier to build such a prototype.
If you only want two, then consider 605nm + 405nm, cuz those are cheap-to-source LEDs by virtue of being the same wavelengths as DVD & Blu-ray players respectively...
@stman @vidak @50htz @theruran @forthy42 whilst this is not OFDM as in Li-Fi, the FDM part could really increase signal integrity and speed under adverse conditions (i.e. an urban area with light pollution and/or links that have units facing the sun at least once a day).
- We just don't have subcarriers but modulate to narrow optical frequencies directly.
@kkarhan I'm not closed to what you claim at all.
Neither I am closed to laser usage.
The problem is, if we use lasers, we will for sure have law enforcement immediately barking at us, saying we're endangering national security with planes that could be disturbed/attacked with our lasers, or whatever similar fake pretext. They'll charge us with anything they can. You know it.
@kkarhan You now have most of the key technological & contextual information involved in the opportunity of developing 100 Mbps #ronja-like devices.
IMHO, what is needed now to make clever, educated choices is more field and lab testing of :
• The "peak + sustain" LED-current ad-hoc driver trick that can boost any high-power LED's bandwidth (10 ~> 50 MHz) by a 10x factor.
• Hundreds of different LEDs, to determine their maximum bandwidth, as those key data are almost NEVER mentioned in their respective datasheets (which is a shame, by the way, as if there were a conspiracy hiding this info to slow down any high-speed #ronja development).
• Then, we should better evaluate how an MLT-3 encoded analog signal can be multiplexed onto different channels, reducing the bandwidth needed per channel and improving signal integrity when the device is used in bad meteorological conditions or a highly degraded environment.
• We shall also evaluate IN THE FIELD all these different multiple-channel techniques you presented.
= What we need now are development-board prototypes.
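The "peak + sustain" driver trick from the first bullet can be modelled numerically. The current values and pulse widths below are made-up placeholders; the real numbers are exactly what the proposed lab tests would determine:

```python
def peak_sustain_drive(bits, peak=1.0, sustain=0.35, peak_samples=2,
                       samples_per_bit=10):
    """Toy pre-emphasis drive waveform: on each 0->1 edge, push a
    short high-current 'peak' to charge the LED junction capacitance
    quickly, then drop to the lower 'sustain' current for the rest
    of the bit. All values are illustrative, not measured."""
    wave = []
    prev = 0
    for b in bits:
        for i in range(samples_per_bit):
            if b == 0:
                wave.append(0.0)
            elif prev == 0 and i < peak_samples:
                wave.append(peak)      # overshoot only on the rising edge
            else:
                wave.append(sustain)
        prev = b
    return wave
```

The design idea is that the brief overshoot shortens the optical rise time (faster charging of the junction capacitance), while the sustain level keeps average power and heat within the LED's ratings.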
@stman @vidak @50htz @theruran @forthy42
Granted, I'd never say you should use lasers.
Simply choose LEDs in said spectrum and that should be fine.
- Remember, we ain't blasting Class 4 laser beams at airliners, and if we were to use any lasers, we'd definitely have to use "eye-safe" Class 1 or similar.
Granted, the design would most likely look like a traffic light (with a cone akin to a toilet-paper roll around it to ensure the inner center is only illuminated by the LEDs) or a big-ass outdoor surveillance camera (cuz there are mass-produced chassis for even big ones, reducing cost when it comes to an IP67 = weatherproof chassis).
@stman @vidak @50htz @theruran @forthy42 ...this also includes looking into datasheets of LEDs to see their endurance and specs.
Potentially you'd need a heatpipe'd cooler and external cooling fins to radiate away heat once you get to some 1-10+ W LEDs.
- Also consider that you'd potentially be better off with a "multimode" configuration using multiple LEDs in rings or sectors.
@stman @vidak @50htz @theruran @forthy42 I don't think #RONJA's development was actively sabotaged.
- I just assume that #WiFi over longer distances simply got better and cheaper: longer-range, faster #COTS products that, thanks to light regulation and certified conformity, didn't require any costly permits or frequency allocations compared to licensed-spectrum links.
In fact, @BNetzA did extend frequency allocations for #BFWA usage and dedicated the 5.8 GHz band to #WISP|s that want to provide #broadband in rural areas.
- So it's not the fault of RONJA or its devs, but the fault of WiFi and proprietary solutions for having caught up.
Personally, I wish for #OpticalLinks more because they are not just less prone to eavesdropping, but also less taxing on the increasingly scarce resource that is #RadioSpectrum, which we should be more mindful of, especially since the solution to overcrowded #spectrum is rather obnoxious, as in "dialing up power consumption to implement ever more complex coding schemes"...
- Not to mention it's easier to secure an optical link, and I think it would be best if said system could universally tunnel any data, from serial to USB and from 10/100M Ethernet to anything else, as having a "Universal Visible-light Optical Link" (maybe even call it "#UVOL", cuz "VOL" and "UVLOL" or "ULOL" may be too much of a bad take) could really do wonders.
Even if it can only do like 12Mbit/s for USB 2.0 it'll still be practical for many use-cases.
- As you said: You want to focus on the "modem" part first...
@stman @vidak @50htz @theruran @forthy42
I guess some photo detectors with narrow FOV (~0.1°) and matching LEDs, alongside some means to encapsulate all that, would be fine...
- If we can make those testing boards and put them in enclosures small enough to fit in a shopping bag, I could potentially test them in a steam-bath chamber not far from my place. 45°C with condensing humidity and visibility at arm's length should be a hardcore test environment, simulating the worst possible conditions (heat, humidity, fog, rain), including rapid temperature changes (can hose it down with tap water)...
That's what is needed.
Dev Test Boards.
To allow us to test different "driver pulse shape" configurations (to boost the LEDs' bandwidth) and different LEDs, colors, chromatic filters...
We shall define the functionalities of such test boards, both for TX and RX.
These test boards must be FPGA-based so that we can really customize all kinds of settings.
Here is the publication URL I found interesting about the different encoding schemes used for Ethernet in general.
It details well the advantage of using MLT-3 encoding in 100BASE-TX Ethernet for twisted pairs.
http://units.folder101.com/cisco/sem1/Notes/ch7-technologies/encoding.htm#mlt3
It's good to understand how MLT-3 reduces the necessary bandwidth of LEDs.
Yeah, well, when I see how some "CIA-driven hackers" in Paris "received" my idea of developing 100 Mbps #ronja's: saying "Yes, nice idea", and then, when I asked to really work together on developing it seriously, everybody left.
As usual.
It's always like that.
And then you know these guys pretend being anarchists and antifascists.
@stman @vidak @50htz @theruran @forthy42 OFC such testing boards would necessitate FPGAs just to be able to provide the necessary processing and diagnostic metrics necessary to finalize any spec and to get decent results you can actually work with.
- Cuz using, like, a chamber with supercritical CO₂ is not a substitute for actual water vapour and fog, because the absorption bands are completely different!
@stman @vidak @50htz @theruran @forthy42 OFC, and with optical links there's basically no Fresnel zone, so on the receiving end one could employ relatively cheap, long-focal-length optics (i.e. 500mm+ at f/8 or darker) to align the receiver with the sender...
Those camera cases may not be "low cost" but are at least cheaper than DIY, and they certainly exist as COTS products with industrial distributors, cuz i.e. some chemical plant needs a camera to point at a gauge in a hazardous area and it needs to withstand the elements and shit...
Obviously I don't expect some vandalism-proof Ex-certified case designed to survive being showered in hydrofluoric acid, but just a pair in some IP67 case that can be operated standalone and do stuff like "measure link distance" and "deal with signal attenuation", and most importantly record the SNR and attenuation properly as measurements, so that the finalized spec has a clear rate set similar to WiFi specs, where any auto-negotiation just has a finite set of options and thus chooses the fastest reliable link and automatically renegotiates upon worsening and improving conditions.
- After all, a slower link is better than no link!
@stman speaking of leaving: I opened your profile and saw that the moderators of https://mastodon.social have basically shadowed your account.
- Kinda confirms your Sandboxing again...
Maybe ask the mods about it?
- Otherwise consider moving over to infosec.space, as Jerry is pretty cool.

@kkarhan I know. This is the CIA's fault.
They are behind my sandboxing.
Because after the CARNAGE they did in my antitrust case against AMAZON (more than 7 "covert" murders on French territory with this case), they know I want my crypto-anarchist revenge.
Back to the 100 Mbps #ronja study :
Here is a computation of the max and min analog signal frequencies on an Ethernet 100BASE-TX pair when encoded in MLT-3.
These computations were done taking into account the 802.3 standard, which defines both how to encode 4-bit nibbles from an Ethernet frame into 5-bit symbols (4B/5B encoding), and how those symbols are then encoded into MLT-3.
The good news is that we can directly drive an LED with the Ethernet signal.
Many high-power LEDs can switch at up to 50 MHz.
But if the max frequency of the MLT-3 signal is 25 MHz, then it's even easier to choose suitable high-power LEDs.
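A quick sanity check on those numbers, assuming the standard 125 Mbaud line rate of 100BASE-TX. Note that this bookkeeping lands on 31.25 MHz for the maximum fundamental, slightly above the 25 MHz figure used above, so it's worth reconciling with the original computation:

```python
# Rough MLT-3 spectral bookkeeping for 100BASE-TX, assuming the
# standard 125 Mbaud line rate (100 Mbit/s of data after 4B/5B).
line_rate = 125e6  # baud: 100 Mbit/s * 5/4

# Worst case for plain NRZI: a transition every baud, i.e. one full
# cycle every 2 bauds.
nrzi_max_fundamental = line_rate / 2   # 62.5 MHz

# MLT-3 needs four level steps (four '1' bits) to complete one full
# cycle 0 -> +1 -> 0 -> -1 -> 0, so the maximum fundamental drops
# to a quarter of the line rate.
mlt3_max_fundamental = line_rate / 4   # 31.25 MHz
```

Either way, the point of the thread stands: the fundamental sits far below the bit rate, well within what a fast LED can follow.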
We don't even need multiple chromatic channels, except for transparent real time error correction or redundancy.
All in all, implementing 100 Mbps #ronja is actually simpler than I thought.
This is very good news.
We just need a fucking driver to drive the fucking LED in current mode directly from the MLT-3 analog signal of the TX pair, just pre-biasing the LED with a DC current to sit just under its linear zone, or in the middle of it. This is pretty cool.
100 Mbps ronjas can be fully analog stuff. Even simpler than I thought.
This is very encouraging.
@kkarhan @50htz @vidak @theruran @forthy42
And it's even simpler on the receiving side with the PIN photodiode : as the minimum frequency is 12.5 MHz, we can filter everything under 10 MHz with a 5th-order high-pass filter, and we can therefore ensure that the signal reconstruction on the receiving end will be rather resistant to environmental noise.
Incredible. 100 Mbps Ronjas are even simpler to create than the 10 Mbps versions, with better SNR.
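The headroom of that 5th-order high-pass can be checked analytically, assuming an ideal Butterworth response with fc = 10 MHz (a modelling assumption; a real discrete filter will deviate somewhat):

```python
import math

def butterworth_highpass_mag(f, fc=10e6, order=5):
    """Magnitude response of an ideal n-th order Butterworth
    high-pass: |H(f)| = 1 / sqrt(1 + (fc/f)^(2n))."""
    return 1.0 / math.sqrt(1.0 + (fc / f) ** (2 * order))

def attenuation_db(f, fc=10e6, order=5):
    """Attenuation in dB at frequency f."""
    return 20 * math.log10(butterworth_highpass_mag(f, fc, order))

# At the 12.5 MHz minimum signal frequency the wanted signal passes
# nearly untouched (~ -0.44 dB), while slow ambient-light noise
# (sunlight flicker, mains hum) far below fc is crushed.
```

For example, `attenuation_db(12.5e6)` is about -0.44 dB, while `attenuation_db(1e6)` is around -100 dB, which is the SNR advantage being claimed here.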
Knowledge is power.
You can't kill an idea.
@stman @vidak @50htz @theruran @forthy42 so you can literally integrate a PHY if not MAC into it?
- For the MAC address, just buy an AT24MAC402 or any other MAC chip, so you'd even comply with the Ethernet specs in terms of a unique EUI-48.
https://www.microchip.com/en-us/products/memory/serial-eeprom/mac-address-and-unique-id-eeproms
@stman @50htz @vidak @theruran @forthy42 if it's really that simple, then sure, that should be it.
- I'd still at the very least consider early modularization and scalability as in enabling transparency for the lower layers, so that i.e. an optical link for use underwater could just get blue LEDs (as red light gets filtered) and to allow both cheap short range (red LEDs), medium and long range units to share the same PCB - if not units using UV, IR or god forbid Lasers.
This will help reduce the risk of lock-ins and will allow people to DIY this regardless whether a single part is available and they've to source an alternative.
- Also, given that diffraction and attenuation from rain and fog are to be expected, having frequency diversity at a lower symbol rate can actually help keep the same link speeds under said adverse conditions, or allow for saving power, reducing light pollution, or merely reducing link speed in extreme cases.
Mind you, such a #RONJA setup will likely get deployed where RF links are not an option and/or not affordable to the installers, so you can basically account for anything from impoverished communities to RF quiet zones to scientific setups; thus being able, in theory, to run with any decently fast switchable LED may be important long-term.
The lesson to remember here is that MLT-3 encoding is great for twisted pairs, and for LED transmission. The fact that it has minimum and maximum frequencies is great.
You will note that the signal frequency is way lower than the bitrate. 5 times less, at worst.
YES.
I really enjoy, as crypto-anarchist, the simplicity of the design.
James & Chris : Private autonomous infrastructure in the light of P2P applications, combined with nodes secured by hardstack+PUF+SoC ICs with remote IC & software trust capability, bring the resilience against Sybil & Cloud based attacks to another level. Things are getting really exciting.
@stman @50htz @vidak @theruran @forthy42 I also got my hands on at least parts of the IrDA spec, thx to some friendly user pointing me in the right direction...
https://www.irda.org/standards/specifications
https://swecyb.com/@troed/114732638941974011
@stman @50htz @vidak @theruran @forthy42
Of notable value may be the #IrLAN stack, given that it's designed to tunnel #Ethernet through a 4Mbit/s FIR link.
- I'm pretty sure that the same can be applied to visible light without much of an issue...
https://www.irda.org/standards/pubs/IrLAN.PDF
https://www.irda.org/standards/pubs/litever10.pdf
The IrDA standard is not important for Ronja, in my opinion.
1.4 km with 5 mW of light power was reached in 2003 with the original Ronja, using COTS high-power red LEDs of 3000 or 5000 millicandelas of luminous intensity.
Today, we have high-power red LEDs of 700 mW of light power (1 watt of electrical power), designed to modulate up to 30 MHz (our requirement is 25 MHz).
We can expect an improvement of the range up to 10 or 15 km with these power levels.
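A rough plausibility check of that range estimate, using naive inverse-square scaling (clear air, fixed optics, no atmospheric extinction, which is exactly what this ignores):

```python
import math

# Naive geometric scaling: with fixed optics, received power falls
# as 1/d^2, so the clear-air range scales with sqrt(P_tx).
p_old = 5e-3    # W, original Ronja optical power (from the thread)
p_new = 700e-3  # W, modern high-power red LED (from the thread)
r_old = 1.4     # km, original Ronja demonstrated range

r_new = r_old * math.sqrt(p_new / p_old)  # ~16.6 km, clear air only

# Real-world range is worse: atmospheric extinction adds a loss in
# dB *per km*, so it eats disproportionately into long links; the
# 10-15 km estimate above is the more realistic target.
```

So the 10-15 km figure is consistent with the raw power ratio once weather losses are accounted for.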
@stman @50htz @vidak @theruran @forthy42 granted, my idea is to use as many light spectrums as feasible, cuz if you can do 100 Mbit/s just with a single wavelength, then 1 Gbit/s should be feasible with 10 wavelengths in parallel.
Not to mention using green and especially blue light (but not UV!) will also increase resistance to humidity and precipitation as attenuators.
https://infosec.space/@kkarhan/114733561702920937
First, we have to dig into the 802.3 standard to see how the signal is encoded over twisted pairs for 1000BASE-TX. Maybe it's MLT-3 again? I dunno. Or ask ChatGPT, it will save a lot of time, and on such a precise question it's unlikely to go wrong.
Once we know the encoding, we can debate and make educated guesses.
@stman @50htz @vidak @theruran @forthy42 merely doubling the baud rate per wavelength compared to #RONJA can yield 100 Mbit/s at 5 wavelengths, whilst offering us a better link budget & range.
https://infosec.space/@kkarhan/114719599251092905
Not to mention #IrDA as a protocol could be repurposed as a diagnostics / link renegotiation channel and even as ultra-long-range / narrowband & broadcasting (PtMP) solution (see #IrLAN)β¦
Note that the maximum possible range achievable under lab conditions will be way better than the real-world results, simply because not only do humidity and rain exist, but where they don't (i.e. deserts), thermal stress and expansion move things out of optimal alignment over the day...
- And in a lab you don't have things potentially disturbing it.
@kkarhan I fear AI, but when you formulate the questions well, forcing it to give its sources, like asking in which chapter of the 802.3 standard this is defined for 1000BASE-TX, it saves you time, and it generally works very well.
I mostly use GPT feeding it with my own sources, typically the 802.3 standard PDF.
@stman @50htz @vidak @theruran @forthy42 Also, being able to adaptively switch bandwidths and, if necessary, downgrade link speeds will be an important feature.
- Most RF-based PtP bridges use 2.4 & 5 GHz backup radios, because 24 GHz is susceptible to rain fade and 60 GHz has serious attenuation over distance, since it sits at the absorption band of oxygen!
Also, having multiple wavelengths at hand allows for flexible use even in light-polluted areas and allows for a higher installation density, as units can self-coordinate their used wavelengths even when close by.
- Not to mention being able to i.e. switch to a different wavelength on the fly in case i.e. flashing lights from nearby fire trucks cause a temporary loss of said wavelength...
Light is basically completely unregulated spectrum, so we'll have to accept and deal with sabotage incl. LIDAR lasers being used as a weapon to burn out detectors/sensors...
It's going to require effort to get the analog part right, with many field trials and several prototypes before it works well, and that is already enough.
Then we want the price and complexity to be as low as possible, for political and cyber-security reasons.
KISS
@stman @50htz @vidak @theruran @forthy42 Yet somehow we have to deal with changing conditions, and pretty sure you too have to admit that having a link that may throttle due to weather is better than having no link, or poor reliability because it needs perfect conditions.
Optical PtP links outside of laboratories have to deal with changing and adverse conditions, and somehow you've got to communicate between the units if they lose contact or the error rate is too high, so they need to renegotiate a lower link width.
- Especially since this ain't like EuroDOCSIS, where interference is minimal unless people actively sabotage things.
I'm also referencing #IrDA because they already did some fundamental research.
- And whilst I was looking, I found out that there's also a lack of newly made IrDA transceiver chips like the STIR4220, so designing a new one (even if it's just an FPGA reimplementation) is desirable for a lot of "legacy" applications as well as secure data exchange at short distance.
In that case, find me another crypto-anarchist electronics hacker-engineer, so that I can share the workload with him.
These high-power (~700 mW optical power, compared to the 5 mW original Ronja) LEDs (from Luminus, a great brand) I mentioned earlier :
High Power Red LED (Operational up to 30 MHz) - DigiKey price : $1.60 :
https://download.luminus.com/datasheets/Luminus_SST-10-R_Datasheet.pdf
High Power IR LED (Operational up to 30 MHz) - DigiKey price : $2.74 :
https://download.luminus.com/datasheets/Luminus_SST-10-IR_Datasheet.pdf
I'm going to buy a few of these two LEDs and I am going to verify their frequency response with my super "Analog Discovery 3" oscilloscope that will be perfect for this task.
I will also buy a suitable PIN photodiode, and a MOSFET to drive the LED correctly.
These datasheets don't mention 30 MHz, but several applications using them report that their junction capacitance allows modulation >30 MHz; I need to check to be certain.
Understand that with 5 mW of optical power, the original Ronja could transmit from ~1.4 up to ~2 km while keeping good signal integrity, and with only a standard low-cost 10 cm diameter office lens (the diameter of the lens being a key factor and parameter).
So with ~700 mW of optical power, not only can we expect a good range increase, but for a few km we can expect more reliability, signal integrity & resilience against meteorological hazards.
I'm not a photonics engineer, but I still think that what's important in Ronja is the alignment of the LED beam right at the center of the lens :
-> Two fine-tuning screws for the X and Y axes are needed to position the PCB holding the LED so that its optical emission axis goes straight through the optical center of the lens.
-> The lens shall also have the same X and Y fine-tuning screws, to place it perfectly perpendicular to this optical axis.
-> Finally, a Z fine-tuning screw is needed to adjust the distance between the LED and the lens.
With such fine setting screws, you can get it perfectly aligned and focused.
The same kind of fine-tuning screws exists between the two opposing mirrors of a gas laser tube.
I think the best way to achieve this at low cost is to use standard 10 cm PVC plastic tube, but the holder of the PCB carrying the LED, and the holder of the lens (a low-cost 10 cm diameter glass magnifying lens), should be 3D-printable for better reproducibility of our Ronjas.
We should implement a similar mechanical part but 3D printed.
3D printed parts really seem to be the best option, it's cheap, it's precise.
And it's also going to be needed for the cooling part : we need the PCB holder to include what's necessary to mount a small standard fan blowing fresh air onto the heat-dissipation radiator placed on the back of the PCB, to cool the LED mounted on the other side.
@stman @50htz @vidak @theruran @forthy42
As for cooling, I'd propose some heat pipes reaching from the inside middle and a partially exposed back, ending in a fin stack next to the connection breakouts, with an industrial-grade fan that isn't obnoxious and could be set to either a fixed RPM or full blast.
Obviously the relatively high-power LEDs will require cooling, and depending on the installation location, passively radiating the heat through a heavy & expensive metal case may not even be an option (and unless the internals are finalized, it would likely be a "THICC & heavy boi" case, more than it needs to be).
- It won't look as sexy as DIY Perk's setup but given the fact the system by nature will be hammered by the elements I don't see much of an option as of now.
Maybe down the line we'll have some quite detailed thermals at hand and thus can just get away with heatpipes to a metal case or a chonky cooler exposed to the outside but this ain't today...
@stman @50htz @vidak @theruran @forthy42 yes, tho 3D-printing glass is just out of our reach and way too expensive.
Tho I'm convinced there are manufacturers of standardized magnifying lenses re: the light source.
- As for the receiver, I propose using C(S)-mount and some cheap tele lens for zooming right at the transmitter at the other end, avoiding most noise by simply reducing the FOV to less than 0.1°.
Some tubing and insulation will be needed anyway if we were to put the transceivers in a shared case, to avoid light bleeding into the receiver, but any opaque plastic could do that...
- And yes, I'd recommend putting transmitter & receiver in one case to ease alignment and installation issues.
@stman @50htz @vidak @theruran @forthy42
I think it would be easier to just have the transceiver parts solidly mounted in the case and then use adjustment parts like those for Ka-band two-way satellite internet to enable fine adjustment.
- Again: Having some narrowband protocol for setup so one can do the "tone method" of fine adjustment would make installation far easier.
Pointing the ronja in the perfect direction of its receiver counter part is another kind of fine tuning.
Here, I'm talking about the mechanical optical fine tuning of a given ronja. It needs perfect internal alignment in order to deliver the best range performance.
@stman @50htz @vidak @theruran @forthy42 sure. Tho the question is how narrow and fine-tuned will it have to be?
- I.e. what focal-length equivalent will be used for the receiver part?
Pretty sure fine alignment can be done similar to a rifle scope with windage and drop compensation, tho ideally we'd hard-mount the optics to the casing and align the case itself (maybe put a NATO rail on top, so for setup one just screws a scope on to pre-align)...
- I'm confident that you aim for an FOV in the 0.1° to 0.01° range, so mounting and alignment should be done similar to Ka-band two-way satellite internet dishes.
The most limiting factor for any real-world-deployments will be attenuation by weather, humidity and heat as well as other atmospheric disturbances.
- I'd assume that 10 km link length @ 99% uptime will be the practical limit due to said issues, which in my book is fine. OFC being able to downgrade will potentially allow reaching 99.9% - 99.95%, depending on the bitrate it can renegotiate (i.e. down to 10 Mbit/s, 1 Mbit/s or even 115.2 kbit/s [SIR equivalent]).
Like any "shared medium", the transceivers will have to constantly measure the SNR per wavelength and dynamically enable/disable/switch symbol rate & FEC as evasive action (i.e. having to disable 405nm blue because some ambulance warning lights are in the FOV)...
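That per-wavelength SNR monitoring and evasive switching could look something like this toy negotiation loop. The threshold and the rate mapping are invented placeholders, not part of any spec discussed here:

```python
def select_link_config(snr_db_per_channel, min_snr_db=12.0,
                       rates_mbps=(100, 50, 25, 10, 1)):
    """Toy rate negotiation: drop wavelengths whose SNR is below a
    threshold, then pick the fastest rate the surviving channel
    count can plausibly carry (illustrative mapping only)."""
    usable = [ch for ch, snr in snr_db_per_channel.items()
              if snr >= min_snr_db]
    if not usable:
        return usable, None  # link down: fall back to reacquisition
    # Assume aggregate rate scales with the surviving channel count.
    cap = rates_mbps[0] * len(usable) / len(snr_db_per_channel)
    best = max(r for r in rates_mbps if r <= cap)
    return usable, best
```

E.g. if an ambulance blinds the 405nm channel, the other two wavelengths keep a degraded but working link, which is the "slower link beats no link" point above.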
For the internal alignment of the center of the LED beam (where power density is maximal) towards the center of the lens, we should use a photodiode to measure the power density within a little 1 mm hole corresponding to the mechanical center of the lens, measuring the received-power variation as we fine-tune the position of the PCB holding the LED, thanks to a PIN photodiode behind the hole, and checking the signal with an oscilloscope.