Just a quick clarification of this point. The "squaring off" of waveforms can occur in either HD or SD digital recording. This is called "clipping," and it occurs when the input signal is too loud. The waveform then flattens at the 0 dB (full-scale) limit of the recorder. To use a metaphor: think of the digital recording as a physical window, and a loud sound as an object larger than the window opening - to "fit" through the window, its tops and bottoms have to be clipped off. The result is unpleasant distortion on playback.
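If it helps to see it in numbers rather than words, here is a minimal sketch in Python/NumPy. The 1 kHz tone, 48 kHz sample rate, and 1.5x-full-scale level are just illustrative choices, not anything specific to a particular recorder; the point is only that whatever exceeds full scale gets flattened off.

```python
import numpy as np

# A 1 kHz sine wave at a 48 kHz sample rate, deliberately recorded "too hot"
# (peaks at 1.5x full scale, where full scale = 1.0 in floating point).
sample_rate = 48_000
t = np.arange(0, 0.01, 1 / sample_rate)            # 10 ms of audio
hot_signal = 1.5 * np.sin(2 * np.pi * 1_000 * t)   # peaks at +/- 1.5

# The recorder cannot store anything beyond full scale (0 dBFS),
# so the tops and bottoms of the wave are simply flattened off.
clipped = np.clip(hot_signal, -1.0, 1.0)

print("original peak:", hot_signal.max())  # 1.5
print("clipped peak: ", clipped.max())     # 1.0 -- the "squared off" top
```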
The difference between SD and HD is the "smoothness" of the captured waveform. In digital recording, an analog sine wave (or blend of waves) is captured almost like a photograph: the wave is represented by digital data that takes "a photo" of it. It isn't literally a photo, although when you look at the pictorial displays in audio editing software you are essentially seeing one. As with anything digital, the greater the sampling, the more detail of the original wave you capture. A waveform captured at a lower sample rate - when examined extremely closely - may exhibit some "stair stepping," just as a digital photo, when zoomed in far enough, begins to look like a series of blocks. The higher the sampling rate, the further you would have to zoom in before you noticed the steps between one sample and the next.
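A rough way to see the "how many photos per cycle" idea, again just a sketch with made-up numbers (an 8 kHz vs. 96 kHz capture of the same 1 kHz tone):

```python
import numpy as np

# The same 1 kHz sine wave "photographed" at two different sample rates.
# Fewer samples per cycle = a coarser, more stair-stepped picture when
# you zoom in on the raw sample values.
def sample_sine(freq_hz, sample_rate, duration_s=0.002):
    t = np.arange(0, duration_s, 1 / sample_rate)
    return np.sin(2 * np.pi * freq_hz * t)

low_res  = sample_sine(1_000, 8_000)    # coarse capture
high_res = sample_sine(1_000, 96_000)   # fine capture

# 2 ms of a 1 kHz tone is 2 full cycles, so divide by 2:
print("samples per cycle at  8 kHz:", len(low_res) // 2)    # 8
print("samples per cycle at 96 kHz:", len(high_res) // 2)   # 96
```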
Now as to whether these differences are audible on playback, that is a matter of great debate. I would simply say that higher resolution is better for archival purposes: once you capture at a given resolution, you cannot regain detail that was never recorded (unless you go back and recapture the original source at a higher resolution).
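One toy demonstration of that "you can't get it back" point, with arbitrary example numbers: a 15 kHz tone captured at 48 kHz, then reduced to an 8 kHz copy. A 15 kHz tone cannot exist at all in an 8 kHz file, so no amount of upsampling the copy brings it back; only returning to the original source would.

```python
import numpy as np

sample_rate = 48_000
t = np.arange(0, 0.01, 1 / sample_rate)
original = np.sin(2 * np.pi * 15_000 * t)       # high-frequency detail

decimated = original[::6]                       # keep every 6th sample (~8 kHz copy)
t_low = t[::6]
restored = np.interp(t, t_low, decimated)       # stretch the copy back to 48 kHz

# The round trip does not reproduce the original -- that detail is gone for good.
error = np.abs(original - restored).max()
print(f"worst-case difference after round trip: {error:.3f}")
```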
This is a huge topic with lots of nuance and caveats that maybe belongs in another thread.