@symonty great idea. Yeah, I've been digging deep into OpenSCAD lately, too. For the problem I'm after, OpenSCAD has the same issue as the rest of the tools I've found (or made): it generates a drawing in a fairly naive way. It is "the easy way" to create the DWG/DXF from a picture or STL.
What follows may be more than you wanted to know:
What I'm really after is a compressor, a tool that reduces complexity by understanding what all those polylines with too many control points are trying to represent. I was originally after solving this for 3D STL triangles, but that is way beyond my ability, so I've decided to reduce the scope and work in 2D (probably still beyond my ability). Many cool things are happening (like DALL•E) where machines draw amazing pictures (out of randomness).
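As a concrete (non-ML) baseline for the 2D case, here's a quick Python sketch of Ramer-Douglas-Peucker simplification, which drops the control points that don't matter. The function name, epsilon, and toy data are all just mine:

```python
import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker: drop control points that deviate less
    than `epsilon` from the chord between the segment endpoints."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1e-12  # guard closed loops
    # Perpendicular distance of each interior point to the chord.
    dists = [abs(dy * (x - x1) - dx * (y - y1)) / norm
             for x, y in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] > epsilon:
        # Keep the farthest point; recurse on both halves.
        return rdp(points[:i + 1], epsilon)[:-1] + rdp(points[i:], epsilon)
    return [points[0], points[-1]]

# A noisy "line" with 100 control points collapses to its 2 endpoints.
noisy = [(x / 10.0, 0.001 * (x % 3)) for x in range(100)]
print(len(rdp(noisy, epsilon=0.01)))  # -> 2
```

That only removes points, though; it never figures out that a blob of segments "is" a circle, which is the part I think needs the model.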
I wonder if a machine can start with an STL (or STL->DXF for now) and make a "good" CAD drawing of it, one that approaches what a human would have drawn if they were reverse engineering the part.
I've read a lot of the GAN/CNN research on how drawings are made, but I don't think that is how a machine would make a CAD drawing. So I'm trying to figure out how to "train" such a model and what it would look like.
I have a pretty good feel for how the "error discriminator" will work and have that sort of working. But I'm not sure yet how the "generator" will work. One idea is to have the generator emit OpenSCAD source, render it, and then score how well it reproduced the drawing.
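Roughly, the loop I'm imagining looks like the sketch below. It's only a sketch: it assumes the `openscad` binary is on PATH, the file names are hypothetical, and the plain per-pixel MSE is a stand-in for the real discriminator:

```python
import subprocess
import tempfile
import numpy as np
from PIL import Image

def score_candidate(scad_source: str, target_png: str) -> float:
    """Render a candidate OpenSCAD program headlessly and score it
    against the target drawing (lower is better)."""
    with tempfile.NamedTemporaryFile(
        suffix=".scad", mode="w", delete=False
    ) as f:
        f.write(scad_source)
        scad_path = f.name
    out_png = scad_path + ".png"
    # Headless render; assumes `openscad` is installed and on PATH.
    subprocess.run(
        ["openscad", "-o", out_png, "--imgsize=512,512", scad_path],
        check=True,
    )
    rendered = np.asarray(Image.open(out_png).convert("L"), dtype=float)
    target = np.asarray(
        Image.open(target_png).convert("L").resize((512, 512)), dtype=float
    )
    return float(np.mean((rendered - target) ** 2))

# Hypothetical candidate the "generator" might emit:
print(score_candidate("circle(r = 20);", "target.png"))
```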
For now, as research, I'm trying to get a computer vision model to draw shapes like circles, ellipses, and squares over the items in the DXF that should have been those primitives instead of lots of little line segments with control points.
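Even before any ML, a classical version of that idea is doable. Here's a sketch, assuming the `ezdxf` Python library, that fits a least-squares (Kåsa) circle to each polyline and swaps in a true CIRCLE entity when the fit is tight; the file names and the 5% tolerance are just placeholders:

```python
import ezdxf  # pip install ezdxf
import numpy as np

def fit_circle(points):
    """Kasa least-squares circle fit: solve x^2+y^2 = 2ax + 2by + c
    for center (a, b) and radius sqrt(c + a^2 + b^2)."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([2 * pts[:, 0], 2 * pts[:, 1], np.ones(len(pts))])
    rhs = (pts ** 2).sum(axis=1)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a * a + b * b)
    resid = np.abs(np.hypot(pts[:, 0] - a, pts[:, 1] - b) - r)
    return (a, b), r, resid.max()

doc = ezdxf.readfile("input.dxf")  # hypothetical input file
msp = doc.modelspace()
for pl in msp.query("LWPOLYLINE"):
    xy = [(p[0], p[1]) for p in pl.get_points()]
    if len(xy) < 8:  # too few vertices to judge
        continue
    center, radius, worst = fit_circle(xy)
    # If every vertex sits close to the fitted circle, replace the
    # polyline with a true CIRCLE entity.
    if worst < 0.05 * radius:
        msp.add_circle(center, radius)
        msp.delete_entity(pl)
doc.saveas("simplified.dxf")
```

The same pattern extends to ellipses and rectangles with different fit functions; the hard part is deciding *which* primitive to try, which is where I hope the vision model comes in.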
There must be research already in this space, but I've not found anything helpful in my search. Ideally, a good solution will be able to take a 3D scanned object (point cloud) and produce a "good" CAD model from it (someday).
And one more note… I don't mean compression in the traditional sense. What I mean by compression is a completely synthetic representation of the "thing". The closest work I've seen is face-vid2vid, which is leaps and bounds beyond the next-best paper on this topic.
Another challenge in this space is that there is so much interesting stuff (and complexity) at every turn. While looking into the language-model side, I got sidetracked onto a little project with the goal of making it easy for almost anyone to run their own experiments to see whether one of the most common transfer-learning language models available today (BERT) has a bias (like gender bias). You can save it to your own Google Drive and run your own tests in Google Colab.
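The core of that kind of probe is small. Here's a minimal sketch using the Hugging Face transformers fill-mask pipeline; the prompts are my own toy examples, not the notebook's actual ones:

```python
from transformers import pipeline  # pip install transformers

# Stock bert-base-uncased predicting the masked token.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

for sentence in [
    "The doctor said [MASK] would be back soon.",
    "The nurse said [MASK] would be back soon.",
]:
    print(sentence)
    for p in unmasker(sentence, top_k=3):
        # Compare which pronouns the model ranks highest per profession.
        print(f"  {p['token_str']:>8}  {p['score']:.3f}")
```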
Sorry for the long-winded reply. I am super excited about where this could go… but I'm also trying to balance it with my real job and the other things I need to focus on.