• 4 Posts
  • 14 Comments
Joined 1 year ago
Cake day: June 18th, 2023

  • For MIT/Apache it doesn’t matter. That’s always a problem with those free-to-use licenses: you have a “good idea” of who’s using it, but you never really can tell. It also creates a shitload of wasted improvements every time a company uses it, mothballs the project, and never pushes code upstream, because why do that? \s So you sit back and hope that someone at the company feels a big enough moral drive or obligation to contribute their improvements upstream. But how can you tell definitively? You can sometimes see it in the job descriptions they’re hiring for, and I’ve had companies reach out to me personally for help. Many open source projects will also reach out and ask, and if they get the OK, will put it in the project description to encourage other companies to do the same.

    So why do companies bother? The funny thing about open source is that it lets people who like solving tough problems (the best type of engineers) know where the tough problems are being definitively solved, because here’s the code, and here’s the author from xyz company contributing and showing the rest of the world how it’s done. Often this brings engineers who are at the top of their game to these companies.

  • The problem with this is that companies like rabbitai are exploiting our inherent drive to teach, the drive to pass on knowledge and make society and life better for the next generation and for ourselves (in this case, through code reviews). That doesn’t work here, because you’re not actually helping out another person who will reciprocate down the line. You’re helping out a large company, which has no moral values and doesn’t operate in society with the same values as a human being. To me a code review is more than just pointing out mistakes; it’s also about sharing knowledge and having meaningful dialogue about what makes sense and what doesn’t.

    There’s no doubt that AI is an amazing achievement, but it seems to me that every application of this technology that involves human interaction manages to simultaneously exploit and erase the core “humanness” of the interaction. I think that’s because these kinds of AI applications are purely monetarily driven, not driven by the advancement of our society. OpenAI had the right idea to start with, but they have sunk into the same trope in lockstep with the rest of the Googles, Apples, and Amazons of the world. Imagine if one of these large companies, say Google, had been given money by the US government to create the ARPANET and then went on to use the technology only for profit. Would we really be in the same connected world we are now?