The difference between raytracing and regular polygonal graphics is that any lighting effect in the latter is a very elaborate trick. There is no unified model for the polygonal graphics pipeline; every effect is a specific algorithmic hack with its own tradeoffs and caveats.
For example, mirror reflections in FPS games work through portals (afaik), as in the floor is literally just the same room rendered a second time, mirrored. The tradeoff is that portal rendering depends on the layout of the environment's geometry itself. You can't just throw any random geometry at it and get reflections; the surface has to be a flat polygon, and be sectioned in a special manner.
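The flatness requirement is baked into the math: you mirror the camera across the mirror's plane and render the whole scene a second time from that virtual camera, which only makes sense if the surface actually is one plane. A minimal sketch of just that step (all names and numbers here are mine, not from any actual engine):

```python
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float
    def __sub__(self, o): return Vec3(self.x - o.x, self.y - o.y, self.z - o.z)
    def __mul__(self, s): return Vec3(self.x * s, self.y * s, self.z * s)
    def dot(self, o): return self.x * o.x + self.y * o.y + self.z * o.z

def reflect_across_plane(point, plane_point, plane_normal):
    """Mirror a point across a plane (plane_normal must be unit length)."""
    dist = (point - plane_point).dot(plane_normal)  # signed distance to plane
    return point - plane_normal * (2.0 * dist)

# A mirror floor at y=0: the virtual camera is the real one flipped below it.
camera = Vec3(0.0, 1.7, 5.0)
virtual = reflect_across_plane(camera, Vec3(0.0, 0.0, 0.0), Vec3(0.0, 1.0, 0.0))
print(virtual)  # Vec3(x=0.0, y=-1.7, z=5.0) -> do render pass #2 from here
```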
Then there are cubemap reflections, where a camera takes 6 pictures from the middle of the room, creating a 360° panoramic image of it. The cube is then sampled along the reflected view direction, which depends on where the viewer is. The problem with this approach is that cubemaps are usually pre-rendered, since drawing 6 extra images at once is expensive, so reflections are static. If you lower the resolution and make them real time, that adds another problem: a dynamic object next to the sample point can hide the rest of the room from its view. Since what the cubemap "sees" and what the viewer "sees" is obviously different (the viewer can see a corner of the room that is hidden from the cubemap), it will look wrong.
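The sampling half of this is roughly: reflect the view direction off the surface normal, and let the dominant axis of the result pick one of the 6 faces. A toy sketch (names are illustrative); note the lookup only uses a direction, never the pixel's actual position, which is exactly why it breaks near corners:

```python
def reflect(d, n):
    """Reflect direction d off unit normal n: r = d - 2(d.n)n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

def cube_face(r):
    """Pick which of the 6 cubemap faces the reflected direction points at."""
    ax = [abs(c) for c in r]
    axis = ax.index(max(ax))              # dominant axis: 0=x, 1=y, 2=z
    return ('+' if r[axis] >= 0 else '-') + 'xyz'[axis]

view_dir = (0.0, -0.7, -0.7)              # looking down toward the floor
floor_normal = (0.0, 1.0, 0.0)
r = reflect(view_dir, floor_normal)
print(r, cube_face(r))                    # ~(0.0, 0.7, -0.7) '+y'
```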
There are also screen space reflections, which are like a very advanced version of using the mirror tool in Photoshop :-DDDD. The problem with screen space algorithms is that there is no geometry data, just the pixels already on the screen, so again, anything that is not on the screen cannot be sampled.
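The usual approach is to march the reflected ray through the depth buffer, one of the few things the screen does give you. A toy sketch of that march (the depth buffer is just a 2D list here, everything else is made up for illustration); the None case is the limitation above:

```python
def ssr_march(depth, x, y, dx, dy, dz, z, steps=64):
    """March from pixel (x, y) at depth z along (dx, dy, dz).
    Returns the hit pixel, or None when the ray exits the screen."""
    h, w = len(depth), len(depth[0])
    for _ in range(steps):
        x, y, z = x + dx, y + dy, z + dz
        xi, yi = int(x), int(y)
        if not (0 <= xi < w and 0 <= yi < h):
            return None                      # off-screen: nothing to reflect
        if z >= depth[yi][xi]:
            return (xi, yi)                  # ray passed behind the geometry
    return None

# 4x4 depth buffer with a "wall" (depth 1.0) in the rightmost column.
depth = [[9.0, 9.0, 9.0, 1.0] for _ in range(4)]
print(ssr_march(depth, 0, 1, 1.0, 0.0, 0.5, 0.0))   # hits the wall: (3, 1)
print(ssr_march(depth, 0, 1, 0.0, 1.0, 0.5, 0.0))   # exits the screen: None
```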
As games moved away from techniques like BSP maps, constructive solid geometry, etc. (since those are bottlenecked by the CPU), they lost some geometry info in the process, and some of these techniques stopped being viable. I think these days only Valve still uses BSP maps; modern engines just throw a bunch of geometry at the GPU, so there isn't any kind of meta-info to work with.
And those are just reflections; shadows are an entirely different process in themselves. Then there's ambient occlusion, and global illumination, all of which are different algorithms, even though reflection, shadow, refraction, etc. are literally the same process IRL.
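To show just how unrelated the shadow machinery is: the standard rasterizer technique is shadow mapping, where you render depth from the light's point of view and compare against it while shading. Here's its core test as a sketch (names are mine); notice it shares nothing with any of the reflection hacks above:

```python
def in_shadow(shadow_map, light_u, light_v, dist_to_light, bias=0.05):
    """True if something sits between this point and the light.
    (light_u, light_v) is the point projected into the light's view."""
    stored = shadow_map[light_v][light_u]     # closest surface the light sees
    return dist_to_light > stored + bias      # we're behind it: shadowed

# 2x2 depth map as seen from the light; an occluder at depth 3.0 in texel (0,0).
shadow_map = [[3.0, 10.0],
              [10.0, 10.0]]
print(in_shadow(shadow_map, 0, 0, 8.0))   # True: occluder between us and light
print(in_shadow(shadow_map, 1, 0, 8.0))   # False: nothing in the way
```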
Raytracing, on the other hand, is literally a simulation of how light bounces. You just throw arbitrary geometry and materials at it, and it gives you an image. Geometry can change in real time and it doesn't matter, because every frame is traced from scratch anyway; there won't be any separation between static objects, dynamic objects, lights, fake lights, shadows, etc., since a raytracing engine doesn't care. It's all just polygons. Which means the entire graphics architecture becomes super simple, and a lot of graphics programmers lose their jobs :-DDDDD.
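To make that concrete, here's a toy greyscale path tracer where the whole "engine" is one recursive function. Shadows are just rays that hit an occluder before reaching a light, reflections are just rays that keep going after a bounce, and GI is the same loop run deeper; nothing special-cases any of it. Every name and number is mine, a sketch of the idea rather than how any real engine is written:

```python
import math, random

def sphere_hit(center, radius, origin, direction):
    """Smallest positive t where a unit-direction ray hits the sphere, or None."""
    oc = [o - c for o, c in zip(origin, center)]
    b = sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    if disc < 0:
        return None
    t = -b - math.sqrt(disc)
    return t if t > 1e-4 else None            # epsilon avoids self-hits

SCENE = [  # (center, radius, albedo, emission) -- greyscale "materials"
    ((0.0, 0.0, -3.0), 1.0, 0.8, 0.0),        # a grey ball
    ((0.0, 3.0, -3.0), 1.0, 0.0, 5.0),        # a spherical light above it
]

def trace(origin, direction, depth=0):
    """Radiance along one ray: the single loop that is lighting, shadows, GI."""
    if depth > 3:
        return 0.0
    best = None  # (t, center, radius, albedo, emission) of the nearest hit
    for center, radius, albedo, emission in SCENE:
        t = sphere_hit(center, radius, origin, direction)
        if t is not None and (best is None or t < best[0]):
            best = (t, center, radius, albedo, emission)
    if best is None:
        return 0.0                             # missed everything: black sky
    t, center, radius, albedo, emission = best
    hit = [o + d * t for o, d in zip(origin, direction)]
    normal = [(h - c) / radius for h, c in zip(hit, center)]
    # Crude diffuse bounce: uniform random direction in the normal's hemisphere.
    d = [random.gauss(0.0, 1.0) for _ in range(3)]
    norm = math.sqrt(sum(x * x for x in d))
    d = [x / norm for x in d]
    if sum(a * b for a, b in zip(d, normal)) < 0.0:
        d = [-x for x in d]
    return emission + albedo * trace(hit, d, depth + 1)

# Average many samples for one pixel looking straight at the grey ball.
random.seed(1)
samples = 2000
pixel = sum(trace((0.0, 0.0, 0.0), (0.0, 0.0, -1.0)) for _ in range(samples)) / samples
print(pixel)  # soft light bounced off the ball; shadows and GI came for free
```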
I think real time raytracing is ultimately the future. But I also think RTX is a gimmick, and the industry will adopt an open standard once raytracing becomes actually viable for rendering the whole scene in real time, not just as a fancy effect on top of the old pipeline.