Not a silly question at all!
Compilers are already really smart and do a lot of heavy lifting, but they’re also restricted to what you write, and they err on the side of safety. They will do things like inline member functions if they aren’t virtual and are simple enough, which reduces the number of indirections. They won’t reorder your classes or rewrite your code. In my experience compilers don’t do a good job at magically auto-vectorizing code (using SIMD registers to their fullest extent), so maybe that’s something a super smart compiler could improve.
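To illustrate the “they won’t rewrite your code” point, here’s a minimal sketch (type names are mine): summing one field out of an array of structs means strided loads, which compilers rarely vectorize well, while a struct-of-arrays layout is a contiguous stream they handle easily. The compiler can’t make this transformation for you because it isn’t allowed to change your data layout.

```cpp
#include <cstddef>
#include <vector>

// Array-of-structs: x, y, z are interleaved in memory, so summing
// just x means strided loads -- hard to vectorize profitably.
struct ParticleAoS { float x, y, z; };

float sum_x_aos(const std::vector<ParticleAoS>& ps) {
    float sum = 0.0f;
    for (const auto& p : ps) sum += p.x;
    return sum;
}

// Struct-of-arrays: all x values are contiguous, so the same loop
// becomes a straightforward SIMD reduction (note you still need
// -ffast-math or similar, since float addition isn't associative).
struct ParticlesSoA { std::vector<float> x, y, z; };

float sum_x_soa(const ParticlesSoA& ps) {
    float sum = 0.0f;
    for (float v : ps.x) sum += v;
    return sum;
}
```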
I would say it’s possible to have a linter let you know if you’re making structs that are cache-unfriendly.
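For instance, member order alone changes how much of a cache line you waste. A hand-rolled check like the static_asserts below (sizes assume a typical 64-bit ABI) is the kind of thing such a linter could automate:

```cpp
#include <cstdint>

// Poor ordering: alignment padding after 'flag' and at the tail
// inflates the struct to 24 bytes on a typical x86-64 ABI.
struct Loose {
    bool     flag;   // 1 byte + 7 bytes padding (double wants 8-byte alignment)
    double   pos;    // 8 bytes
    uint32_t id;     // 4 bytes + 4 bytes tail padding
};

// Largest-first ordering packs the same fields into 16 bytes,
// so 50% more of them fit per cache line.
struct Tight {
    double   pos;    // 8 bytes
    uint32_t id;     // 4 bytes
    bool     flag;   // 1 byte + 3 bytes tail padding
};

static_assert(sizeof(Loose) == 24, "layout assumption: typical 64-bit ABI");
static_assert(sizeof(Tight) == 16, "layout assumption: typical 64-bit ABI");
```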
There are also runtime tools like Intel’s VTune or perf on Linux. Those tools are very powerful, but the learning curve is steep; in my experience you need to know a lot about optimization already to make sense of the results.
Today’s generative AI can give you broad strokes about refactoring a piece of code toward DOD, and I’m sure in a few years it will be able to do that to whole projects.
Oftentimes safety comes at the cost of performance with compilers if you don’t give them enough details: restrict/noalias, packing, alignment, noexcept, assume/unreachable, memory barriers. Rust manages to be both performant and safe because it is a very verbose and restrictive language to write in. C++ gives you all the same tools, but they tend to be off by default. In my experience game devs like to stick with C++ despite the lack of safety guardrails because it’s faster to write efficient code, plus “we’re not making medical equipment” sentiments.
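Here’s a sketch of what “giving the compiler details” looks like in practice. Spellings vary by compiler and standard: `__restrict` is the common GCC/Clang/MSVC extension, and `[[assume]]`/`std::unreachable` are C++23.

```cpp
#include <cstddef>
#include <utility>  // std::unreachable (C++23)

// alignas guarantees the array starts on a 32-byte boundary,
// which permits aligned AVX loads/stores.
struct alignas(32) Block {
    float data[8];
};

// __restrict: we promise dst and src never alias each other.
// [[assume]]: we promise n is a multiple of 8, so the compiler can
// drop the scalar remainder loop. Both are unchecked -- lying is UB.
void add_arrays(float* __restrict dst, const float* __restrict src,
                std::size_t n) {
    [[assume(n % 8 == 0)]];
    for (std::size_t i = 0; i < n; ++i)
        dst[i] += src[i];
}

// std::unreachable tells the optimizer the default case can't
// happen, so the switch can compile down to a bare jump table.
int op_cost(int op) {
    switch (op) {
        case 0: return 1;
        case 1: return 3;
        case 2: return 10;
        default: std::unreachable();
    }
}
```

The common theme is that each of these is a promise the compiler can’t verify; Rust’s trick is encoding the same promises in the type system so they get checked instead.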
“It’s okay when we do it.”