Easier STL conversion

I’ve been making a little tool that tries to make it easier to reverse engineer an STL in Shapr3D. I still have a long way to go, and I’m honestly surprised I’ve made it this far.

It currently will slice an STL in the X, Y, or Z plane and let you visualize the outline of the section. It exports the drawing as SVG or DXF. Next, I want to be able to measure the distance between any 2 points in the 2D view of the 3D object. I’ll probably make the 2D view less 3D too.
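The core of the slicing step can be sketched in a few lines: intersect each triangle of the mesh with the cutting plane and collect the resulting 2D segments, which together form the section outline. This is a standalone illustration with NumPy; the function name `slice_z` and the exact approach are my assumptions, not necessarily how the tool works.

```python
import numpy as np

def slice_z(triangles, z0):
    """triangles: (n, 3, 3) array of vertex coordinates.
    Returns a list of ((x1, y1), (x2, y2)) segments in the plane z = z0."""
    segments = []
    for tri in triangles:
        pts = []
        # Walk the three edges; keep the crossings of the plane z = z0.
        for a, b in ((0, 1), (1, 2), (2, 0)):
            za, zb = tri[a][2], tri[b][2]
            if (za - z0) * (zb - z0) < 0:        # edge straddles the plane
                t = (z0 - za) / (zb - za)        # interpolation parameter
                p = tri[a] + t * (tri[b] - tri[a])
                pts.append((p[0], p[1]))
        if len(pts) == 2:                        # a clean crossing gives one segment
            segments.append((pts[0], pts[1]))
    return segments

# One triangle straddling z = 0.5 yields a single segment.
tri = np.array([[[0, 0, 0], [1, 0, 0], [0, 0, 1]]], dtype=float)
print(len(slice_z(tri, 0.5)))  # 1
```

Slicing in X or Y is the same idea with a different coordinate index (or a rotation applied to the mesh first).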


Very useful!


Nice! Keep us posted.


Hacked up some code that takes the 2D projection from the 3D object and dumps a DXF file. I was able to import that into Shapr3D 🙂

This Perseverance model is the starting point. The code projects the 2D drawing underneath the 3D object. Then it creates a DXF (or SVG) from that 2D drawing. I can do different rotations to get different 2D projections.
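The “2D drawing to DXF” step can be surprisingly small: emit each projected segment as a LINE entity in a stripped-down, R12-style DXF containing only an ENTITIES section. Many importers accept this minimal form, though the actual tool may write richer files; this sketch and the name `segments_to_dxf` are my own.

```python
def segments_to_dxf(segments):
    """segments: iterable of ((x1, y1), (x2, y2)). Returns DXF text."""
    lines = ["0", "SECTION", "2", "ENTITIES"]
    for (x1, y1), (x2, y2) in segments:
        lines += ["0", "LINE", "8", "0",         # LINE entity on layer "0"
                  "10", f"{x1}", "20", f"{y1}",  # start point (x, y)
                  "11", f"{x2}", "21", f"{y2}"]  # end point (x, y)
    lines += ["0", "ENDSEC", "0", "EOF"]
    return "\n".join(lines)

dxf = segments_to_dxf([((0.0, 0.0), (10.0, 0.0))])
print(dxf.count("LINE"))  # 1
```

Each group code (10/20 for the start point, 11/21 for the end point) sits on its own line, which is what makes DXF so verbose but also trivially easy to generate.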

Here it is imported into Shapr3D

Now I need to adapt it to support STL instead of only GLB files.


@Yepher

This may be one of the most useful projects I’ve seen. I wish I had any amount of skill to offer help.

Please keep us updated on your progress, this would be an awesome tool.


Really interesting idea! Certainly better than manually sketching over a photo.

Do you plan to post to github or…?

I am still trying to figure out what to do with it. I think I will make it available as a webpage, but the problem is I’ve made a few different versions (Python, JavaScript, Swift, and Go), and each differs in how you interact with it, so I need to pick a path. And now… I got sidetracked writing a Parasolid-to-STEP transcoder, which has turned out to be a fairly complex project.

DALL·E inspired me. The reason I originally started this was to try to have a computer “draw” the 2D result of a 3D view: give it a .dxf and have it convert millions of little polylines into real geometry. That is well beyond my ability at the moment, but I’m trying to chip away at the problem a little at a time.

I do this using OpenSCAD on my Mac: import an STL, boolean it at the appropriate point, and then project to DXF. But it would be nice to automate this more; OpenSCAD is not a UI tool but a programmer’s one. Maybe you can get some ideas from it (or code, it is open source).
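The OpenSCAD workflow described above can also be scripted: generate a small .scad file that imports the STL, shifts it so the cut height lands at z = 0, and uses `projection(cut = true)` to take the cross-section, then invoke the `openscad` CLI to export a DXF. The `projection` and `import` calls are standard OpenSCAD; the Python wrapper and file paths here are my own placeholders.

```python
import textwrap

def make_section_scad(stl_path, z):
    # projection(cut = true) slices at z = 0, so translate the model first.
    return textwrap.dedent(f"""\
        projection(cut = true)
            translate([0, 0, {-z}])
                import("{stl_path}");
    """)

scad = make_section_scad("model.stl", 12.5)
print(scad)
# To render it (requires OpenSCAD installed):
# import subprocess
# with open("section.scad", "w") as f:
#     f.write(scad)
# subprocess.run(["openscad", "-o", "section.dxf", "section.scad"])
```

Looping over a range of `z` values would give a stack of section drawings from one STL.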


@symonty great idea. Yeah, I’ve been digging deep into OpenSCAD lately, too. But for the problem I’m after, OpenSCAD has the same issue as the rest of the tools I’ve found (or made): it generates the drawing in a fairly naive way. It is “the easy way” to create the DWG/DXF from a picture or STL.

May be more than you wanted to know below here:

I am really after a compressor or a tool that reduces complexity by understanding what all those polylines with too many control points are trying to represent. I originally was after solving this for 3D STL triangles, but that is way beyond my ability, so I’ve decided to reduce the complexity and work in the 2D space (probably still beyond my ability). Many cool things are happening (like DALL•E) where machines draw amazing pictures (out of randomness).

I wonder if a machine can start with an STL (or STL->DXF for now) and make a “good” CAD drawing of it that approaches a drawing that a human would have drawn if they were reverse engineering the drawing.

I’ve read a lot of the research on how GANs/CNNs make drawings, but I don’t think that is the same way a machine would make a CAD drawing. So I am trying to figure out how to “train” such a model and what it would look like.

I have a pretty good feel for how the “error discriminator” will work and have that sort of working. But I am not sure how the “generator” will work. One idea is to have the generator create OpenSCAD syntax and then see how well it reproduced the drawing.

For research, I am now trying to get a computer vision model to draw shapes like circles, ellipses, and squares over the items in the DXF that should have been those shapes instead of a lot of little lines with control points.
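For the circle case there is also a non-learned baseline worth comparing against: a least-squares (Kåsa-style) fit, which collapses a run of polyline points into one center and radius when they really do lie on a circle. This is a sketch of that idea, not the project’s actual model:

```python
import numpy as np

def fit_circle(points):
    """points: (n, 2) array. Returns (cx, cy, r) of the least-squares circle."""
    x, y = points[:, 0], points[:, 1]
    # (x-cx)^2 + (y-cy)^2 = r^2 rearranges to the linear system
    # x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2).
    A = np.column_stack([2 * x, 2 * y, np.ones(len(points))])
    b = x**2 + y**2
    cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r

# 50 points sampled from a circle of radius 2 centered at (3, -1).
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
pts = np.column_stack([3 + 2 * np.cos(theta), -1 + 2 * np.sin(theta)])
cx, cy, r = fit_circle(pts)
print(round(cx, 3), round(cy, 3), round(r, 3))  # 3.0 -1.0 2.0
```

The residual of the fit doubles as a cheap “is this really a circle?” score, which could feed an error discriminator like the one described below.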

There must be research already in this space, but I’ve not found anything helpful in my search. Ideally, a good solution will be able to take a 3D scanned object (point cloud) and produce a “good” cad model from it. (someday)

And one more note… I don’t mean compression in the traditional sense. It is closer to a completely synthetic representation of the “thing”. That is leaps and bounds beyond the next-best paper on this topic: face-vid2vid

Another challenge in this space is that there is so much interesting stuff (and complexity) at every turn. While looking into the language model, I got sidetracked onto a little project with the goal of making it easy for almost anyone to run their own experiments to see whether one of the most common transfer language models available today (BERT) has a bias (like gender bias). You can save it to your own Google Drive and run your own tests: Google Colab

Sorry for the long-winded reply. I am super excited about where this could go… But also trying to balance my real job and other things I need to focus on.
