4 is used for non-deterministic delay - is Random.nextInt() also cryptographically secure?
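(For what it’s worth: no. java.util.Random, which backs nextInt(), is a plain 48-bit linear congruential generator - fine for jitter and delays, but its future output can be predicted from a few observed values. java.security.SecureRandom is the one designed for unpredictability. A minimal Java sketch of the distinction:)

```java
import java.security.SecureRandom;
import java.util.Random;

public class RandomVsSecure {
    public static void main(String[] args) {
        // java.util.Random: a 48-bit linear congruential generator.
        // Fine for non-deterministic-looking delays, NOT crypto-secure:
        // its future output is predictable from a couple of observed values.
        Random fast = new Random();
        int delayMs = fast.nextInt(1000);

        // java.security.SecureRandom: seeded from OS entropy and designed
        // to be unpredictable - use it for keys, tokens, and nonces.
        SecureRandom secure = new SecureRandom();
        byte[] token = new byte[16];
        secure.nextBytes(token);

        System.out.println("delay=" + delayMs + "ms, token[0]=" + token[0]);
    }
}
```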
Anyone who thinks it’s about the prompts (or the programming language du jour) is missing the real questions.
A.I. is taking (at least some) junior dev jobs.
I think this is likely to be a temporary, transient effect that lasts until they figure out that they still need thinking people to work with the LLMs to get what they need. Some of this transition period is going to involve discovering that they didn’t need some of those junior devs in the first place, and eventually discovering that they need more junior devs for new things.
I think it’s IBM that told a story of downsizing 15,000 people in various areas, and re-hiring 20,000 people in other areas - including people tasked with running the AI/LLM interfaces.
I love the lack of exposure to toxic chemicals (usually), the lack of harsh, loud work environments (usually), the comfy chair and peaceful office space (eventually), and the fact that the money is good enough pretty much seals the deal. I like working with my hands, making things, wearing ear and eye protection, cutting down trees with chainsaws, replacing engines in cars: on my own terms, not as something I have to do 250 days a year to avoid homelessness.
MangoCats@feddit.it to Technology@lemmy.world • A Developer Accidentally Found CSAM in AI Data. Google Banned Him For It (English)
1 · 2 days ago
Material can be anything.
And, if you’re trying to authorize law enforcement to arrest and prosecute, you want the broadest definitions possible.
MangoCats@feddit.it to Technology@lemmy.world • A Developer Accidentally Found CSAM in AI Data. Google Banned Him For It (English)
3 · 2 days ago
Google doesn’t ban for hate or feels, they ban by algorithm. The algorithms address legal responsibilities and concerns. Are the algorithms perfect? No. Are they good? Debatable. Is it possible to replace those algorithms with “thinking human beings” that do a better job? Also debatable; from a legal standpoint they’re probably much better off arguing from a position of algorithm vs human training.
MangoCats@feddit.it to Technology@lemmy.world • A Developer Accidentally Found CSAM in AI Data. Google Banned Him For It (English)
2 · 2 days ago
if the debate is even possible then the writing is awful.
Awfully well compensated in terms of advertising views as compared with “good” writing.
Capitalism in the “free content market” at work.
MangoCats@feddit.it to Technology@lemmy.world • A Developer Accidentally Found CSAM in AI Data. Google Banned Him For It (English)
3 · 2 days ago
can be easily interpreted as something…
This is pretty much the art of sensational journalism, popular song lyric writing and every other “writing for the masses” job out there.
Factual / accurate journalism? More noble, but less compensated.
MangoCats@feddit.it to Technology@lemmy.world • A Developer Accidentally Found CSAM in AI Data. Google Banned Him For It (English)
17 · 2 days ago
Google’s only failure here was to not unban on his first or second appeal.
My experience of Google’s unban process: it doesn’t exist, it never works, and it doesn’t even escalate to a human evaluator in a third-world sweatshop - the algorithm simply ignores appeals, inscrutably.
MangoCats@feddit.it to Technology@lemmy.world • I Went All-In on AI. The MIT Study Is Right. (English)
12 · 4 days ago
The statement that “No one can own what AI produces. It is inherently public domain” is partially true, but the situation is more nuanced, especially in the United States.
Here is a breakdown of the key points:
Human Authorship is Required: In the U.S., copyright law fundamentally requires a human author. Works generated entirely by an AI, without sufficient creative input or control from a human, are not eligible for copyright protection and thus fall into the public domain.
“Sufficient” Human Input Matters: If a human uses AI as an assistive tool but provides significant creative control, selection, arrangement, or modification to the final product, the human’s contributions may be copyrightable. The U.S. Copyright Office determines the “sufficiency” of human input on a case-by-case basis.
Prompts Alone Are Generally Insufficient: Merely providing a text prompt to an AI tool, even a detailed one, typically does not qualify as sufficient human authorship to copyright the output.
International Variations: The U.S. stance is not universal. Some other jurisdictions, such as the UK and China, have legal frameworks that may allow for copyright in “computer-generated works” under certain conditions, such as designating the person who made the “necessary arrangements” as the author.
In summary, purely AI-generated content generally lacks copyright protection in the U.S. and is in the public domain. However, content where a human significantly shapes the creative expression may be copyrightable, though the AI-generated portions alone remain unprotectable.
To help you understand the practical application, I can explain the specific requirements for copyrighting a work that uses both human creativity and AI assistance. Would you like me to outline the specific criteria the U.S. Copyright Office uses to evaluate “sufficient” human authorship for a project you have in mind?
Use at your own risk, AI can make mistakes, but in this case it agrees 100% with my prior understanding.
MangoCats@feddit.it to Technology@lemmy.world • I Went All-In on AI. The MIT Study Is Right. (English)
1 · 4 days ago
but it will make those choices, make them a different way each time
That’s a bit of the power of the process: variety. If the implementation isn’t ideal, it can produce another one. In theory, it can produce ten different designs for any given solution, then select the “best” one by whatever criteria you choose - if you’ve got the patience to spell it all out.
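The generate-several-then-pick loop is easy enough to sketch. Here’s a minimal Java version of the idea, where generateDesign() and score() are hypothetical stand-ins for the LLM call and whatever acceptance criteria you define:

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.IntStream;

public class BestOfN {
    // Generate n candidate designs from the same spec, keep the best-scoring one.
    static String pickBest(int n) {
        List<String> candidates = IntStream.range(0, n)
                .mapToObj(i -> generateDesign("same spec, attempt " + i))
                .toList();
        return candidates.stream()
                .max(Comparator.comparingDouble(BestOfN::score))
                .orElseThrow();
    }

    // Hypothetical stub for an LLM call - the real thing would hit an API.
    static String generateDesign(String prompt) { return "design for: " + prompt; }

    // Hypothetical criterion - in practice: tests passed, lint results, review rubric...
    static double score(String design) { return design.length(); }

    public static void main(String[] args) {
        System.out.println(pickBest(10)); // ten attempts, keep the "best" by your own criteria
    }
}
```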
The AI can’t remember how it did it, or how it does things.
Neither can the vast majority of people after several years go by. That’s what the documentation is for.
2000 lines is nothing.
Yep. It’s also a huge chunk of example to work from and build on. If your designs are highly granular (in a good way), most modules could fit under 2000 lines.
My main project is well over a million lines
That should be a point of embarrassment, not pride. My sympathies if your business really is that complicated. You might ask an LLM to start chipping away at refactoring your code to collect similar functions together to reduce duplication.
But we can and do it to meet the needs of the customer, with high stakes, because we wrote it. These days we use AI to do the grunt work, and we have junior devs who do smaller tweaks.
Sure. If you look at bigger businesses, they are always striving to get rid of “indispensable duos” like you two. They’d rather pay 6 run-of-the-mill, hire-more-any-day-of-the-week developers than two indispensables. And that’s why a large number of management types who don’t really know how it works in the trenches are falling all over themselves trying to be the first to fly a team that “does it all with AI, better than the next guys.” We’re a long way from that being realistic. AI is a tool: you can use it for grunt work, you can use it for top-level design, and everything in between. What you can’t do is give it 25 words or less of instruction and expect to get back anything of significant complexity. That 2000 line limit becomes 1 million lines of code when every four lines of the root module describes another module: 2000 ÷ 4 = 500 modules, at up to 2000 lines each, is 1,000,000 lines.
If an AI is writing code a thousand lines at a time, no one knows how it works.
Far from it. Compared with code I get to review out of India, or Indiana, 2000 lines of AI code is just as readable as any 2000 lines I get out of my colleagues. Those colleagues also make the same annoying deviations from instructions that AI does; the biggest difference is that AI gets its wrong answer back to me within 5-10 minutes.
Indiana? We’ve been correcting and re-correcting the same architectural implementation for the past 6 months. They had a full example in C++ that they were going to “translate to Rust” for us. It took me about 6 weeks total to develop the system from scratch, so I figured that with a full example in hand they should be well on their way in 2 weeks. They were nowhere in 2 weeks, so I did a Rust translation for them in the next two weeks and showed them. “O.K., we see that, but we have been tasked to change this aspect of the interface to something undefined, so we’re going to do an implementation with that undefined interface…” So I refined my Rust implementation into a highly polished example, ready for any undefined interface you throw at it, within another 2 weeks, while Indiana continued to hack away at three projects simultaneously, getting nowhere equally fast on all 3. It has been 7 months now; I’m still reviewing Indiana’s code and reminding them, like I did the AI, of all the things I have told them six times over the past 7 months that they keep drifting away from.
MangoCats@feddit.it to Technology@lemmy.world • I Went All-In on AI. The MIT Study Is Right. (English)
11 · 5 days ago
First, how much that is true is debatable.
It’s actually settled case law. AI does not hold copyright any more than spell-check in a word processor does. The person using the AI tool to create the work holds the copyright.
Second, that doesn’t matter as far as the output. No one can legally own that.
Idealistic notions aside, this is no different than PIXAR owning the Renderman output that is Toy Story 1 through 4.
MangoCats@feddit.it to Technology@lemmy.world • I Went All-In on AI. The MIT Study Is Right. (English)
1 · 5 days ago
Nobody is asking it to (except freaks trying to get news coverage).
It’s like compiler output - no, I didn’t write that assembly code, gcc did, but it did it based on my instructions. My instructions are copyrighted by me, and the gcc interpretation of them is a derivative work covered by my rights in the source code.
When a painter paints a canvas, they don’t record the “source code”, but the final work is still theirs - not the brush maker’s, the canvas maker’s, or the paint maker’s (though some pigment makers get a little squirrely about that…)
MangoCats@feddit.it to Technology@lemmy.world • I Went All-In on AI. The MIT Study Is Right. (English)
1 · 5 days ago
Yeah, context management is one big key. The “compacting conversation” hack is a good one: you can continue conversations indefinitely, but after each compact it will throw away some context that you thought was valuable.
The best explanation I have heard for the current limitations is that there is a “context sweet spot” for Opus 4.5 that’s somewhere short of 200,000 tokens. As your context window gets filled above 100,000 tokens, at some point you’re at “optimal understanding” of whatever is in there, then as you continue on toward 200,000 tokens the hallucinations start to increase. As a hack, they “compact the conversation” and throw out less useful tokens getting you back to the “essential core” of what you were discussing before, so you can continue to feed it new prompts and get new reactions with a lower hallucination rate, but with that lower hallucination rate also comes a lower comprehension of what you said before the compacting event(s).
Some describe an aspect of this as the “lost in the middle” phenomenon since the compacting event tends to hang on to the very beginning and very end of the context window more aggressively than the middle, so more “middle of the window” content gets dropped during a compacting event.
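To make that concrete, here’s a toy Java sketch of keep-the-head-keep-the-tail compaction - explicitly not Claude’s actual algorithm, just the shape of the tradeoff that produces the “lost in the middle” effect:

```java
import java.util.ArrayList;
import java.util.List;

public class CompactionSketch {
    // Keep the oldest messages (instructions) and the newest (current task),
    // drop/summarize the middle when the context budget is exceeded.
    static List<String> compact(List<String> messages, int keepHead, int keepTail) {
        if (messages.size() <= keepHead + keepTail) return messages;
        List<String> kept = new ArrayList<>(messages.subList(0, keepHead));
        kept.add("[compacted: " + (messages.size() - keepHead - keepTail)
                + " middle messages dropped]");
        kept.addAll(messages.subList(messages.size() - keepTail, messages.size()));
        return kept;
    }

    public static void main(String[] args) {
        List<String> convo = new ArrayList<>();
        for (int i = 1; i <= 12; i++) convo.add("msg " + i);
        compact(convo, 2, 3).forEach(System.out::println);
        // prints msg 1-2, a placeholder, then msg 10-12: the middle is gone
    }
}
```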
MangoCats@feddit.it to Technology@lemmy.world • I Went All-In on AI. The MIT Study Is Right. (English)
1 · 5 days ago
Depends on how demanding you are about your application deployment and finishing.
Do you want that running on an embedded system with specific display hardware?
Do you want that output styled a certain way?
AI/LLM tools are getting pretty good at taking those few lines of Bash, pipes, and other tools’ concepts, translating them into a Rust, or C++, or Python, or what-have-you app, and running them in very specific environments. I have been shocked at how quickly and how well Claude Sonnet styled an interface for me, based on a cell phone snapshot of a screen that I gave it with the prompt “style the interface like this.”
MangoCats@feddit.it to Technology@lemmy.world • I Went All-In on AI. The MIT Study Is Right. (English)
1 · 5 days ago
I don’t know how rare it is today. What I do know is that it’s less rare today than it was 3 months ago, and it was rarer still 3 months before that…
MangoCats@feddit.it to Technology@lemmy.world • I Went All-In on AI. The MIT Study Is Right. (English)
1 · 5 days ago
If you outsource you could at least sue them when things go wrong.
Most outsourcing consultants I have worked with aren’t worth the legal fees to attempt to sue.
Plus you can own the code if a person does it.
I’m not aware of any ownership issues with code I have developed using Claude, or any other agents. It’s still mine, all the more so because I paid Claude to write it for me, at my direction.
MangoCats@feddit.it to Technology@lemmy.world • I Went All-In on AI. The MIT Study Is Right. (English)
1 · 5 days ago
the sell is that you can save time
How do you know when salespeople (and lawyers) are lying? Their lips are moving.
developers are being demanded to become fractional CTOs by using LLMs because they are being measured by expected productivity increases that limit time for understanding.
That’s the kind of thing that works out in the end. Like outsourcing to Asia, etc. It does work for some cases, and it can bring sustainable improvements to the bottom line, but nowhere near as fast, easy, or cheap as the people selling it say.
MangoCats@feddit.it to Technology@lemmy.world • I Went All-In on AI. The MIT Study Is Right. (English)
1 · 5 days ago
I tried using Gemini 3 for OpenSCAD, and it couldn’t slice a solid properly to save its life; I gave up after about 6 attempts to put a 3:12-slope shed roof on four walls. The same job in Opus 4.5 got me a very nicely styled 600 square foot floor plan with radiused 3D-printed concrete walls, windows, doors, a shed roof with a 1’ overhang, and a Python script that translates the .scad into a good-looking .svg 2D floor plan.
I’m sure Gemini 3 is good for other things, but Opus 4.5 makes it look infantile in 3D modeling.

The question the optimizer can’t really answer is: will Random.nextInt() ever return 10? If that were a 64-bit integer, it could be a LOOOOOONG time before 10 ever shows up.
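Back-of-envelope, assuming roughly uniform draws (and note nextInt() is actually 32-bit; nextLong() is the 64-bit one): the expected wait to see one specific value is 1/p draws, so 2^32 ≈ 4.3 billion calls for a 32-bit draw - seconds at an assumed billion draws per second - but centuries for a 64-bit draw:

```java
public class ExpectedWait {
    public static void main(String[] args) {
        double callsPerSec = 1e9; // assumed throughput, for illustration only

        // 32-bit draw: P(value == 10) = 2^-32 per call, expected wait 2^32 calls.
        double expected32 = Math.pow(2, 32);
        System.out.printf("32-bit: ~%.1f seconds%n", expected32 / callsPerSec); // ~4.3 s

        // 64-bit draw: expected wait 2^64 calls.
        double expected64 = Math.pow(2, 64);
        double years = expected64 / callsPerSec / (365.25 * 24 * 3600);
        System.out.printf("64-bit: ~%.0f years%n", years); // ~585 years
    }
}
```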