Remember when you had to manually label a thousand images for a basic model?
Two years back, my team spent a solid month just tagging pictures of car parts for a simple defect detector. Now, with these new self-supervised learning tools, we got a prototype running in a week. The model learned from unlabeled data we already had. It's crazy. The shift happened maybe last year, when the papers got real. Anyone else building with this stuff now? What's your stack?
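If anyone wants to poke at the core idea, here's a toy NumPy sketch of the NT-Xent objective used in SimCLR-style contrastive learning. Everything here (shapes, temperature, the random embeddings) is illustrative, not our production setup:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss.
    z1, z2: (N, D) embeddings of two augmented views of the same N images."""
    N = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)              # (2N, D) stacked views
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalize rows
    sim = z @ z.T / temperature                       # scaled cosine similarity
    np.fill_diagonal(sim, -np.inf)                    # mask self-similarity
    # the positive pair for row i is row (i + N) mod 2N
    pos = np.concatenate([np.arange(N, 2 * N), np.arange(N)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * N), pos].mean()
```

The intuition: pull the two views of the same image together, push everything else in the batch apart. Identical views should score a lower loss than mismatched ones, which is an easy sanity check to run.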
3 comments
joseph_green13 · 26d ago
My buddy's startup was in the same boat with medical scans. They had a mountain of old, unlabeled images and no budget for annotation. One of their engineers tried a contrastive learning approach on a whim, and it just worked. The speed change has been unreal.
thomasb41 · 25d ago
That "on a whim" part is what gets me, @joseph_green13. Makes you wonder how many other old methods are sitting on a shelf just waiting for a cheap new trick.
drews55 · 25d ago
Wonder if the old scans had notes attached that could be mined for weak labels. Even messy text hints might give that contrastive model a better starting point.
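For what it's worth, mining notes for weak labels can be as simple as keyword matching with an abstain option. A hypothetical sketch (the keyword lists are made up for illustration, not a real clinical ontology):

```python
# Hypothetical weak-labeling sketch: mine free-text report notes for
# coarse labels. Keyword lists are illustrative placeholders only.
ABNORMAL_HINTS = ("lesion", "opacity", "fracture", "abnormal")
NORMAL_HINTS = ("unremarkable", "no acute findings", "normal")

def weak_label(note: str):
    """Return 1 (abnormal), 0 (normal), or None (abstain)."""
    text = note.lower()
    if any(hint in text for hint in ABNORMAL_HINTS):
        return 1
    if any(hint in text for hint in NORMAL_HINTS):
        return 0
    return None  # abstain: better to leave the scan unlabeled than guess
```

Even a noisy labeler like this can seed a supervised head on top of contrastive embeddings, as long as you keep the abstain path so junk notes don't poison the labels.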