Back in the old days, when you bought a hard drive (typically an MFM or RLL type), it usually came with a sticker listing the locations of bad sectors found when the drive was tested at the factory. When you low-level formatted the drive, you'd enter those locations into the formatting tool, and it would mark the sectors as "bad" so they would return an error when read. This was done because a bad sector could sometimes be read successfully depending on what had been written to it, so it would be unreliable, and not always caught by the simple read test done during formatting.
Then the high-level format (the FORMAT command in DOS, or formatting in Windows), which lays down the file system, would scan the disk with reads, identify the unreadable sectors (whether from genuine read errors or from being flagged as "bad" during the low-level format), and mark them in the file system as bad so the operating system wouldn't attempt to use them. These showed up as "bad sectors" in a CHKDSK report, and they were never used.
If a sector failed later on, you'd get I/O errors, and you'd have to use a tool (like Norton Utilities) to attempt to recover the data and mark the newly failed sector as bad in the file system. If multiple sectors started going bad over a short time, it meant the drive was failing and had to be replaced.
When IDE hard drives appeared (and evolved into today's SATA and other newer types), the controller moved onto the drive itself, so the low-level format and the mapping of bad sectors were handled at the factory; the drive only needs to be partitioned and formatted for your operating system. In addition, the SMART system was introduced, which lets the drive diagnose itself and detect certain types of failures, including bad sectors, on the fly.

With these drives (which include modern SSDs), some "spare" space is reserved for reallocating bad sectors as they are detected, and bad sectors are not normally exposed to the operating system, so you generally won't see "bad sectors" when looking at the properties of a file system on a modern drive. If a bad sector is detected during operation, the drive firmware itself handles the remapping. The only time the OS becomes aware of an error is when a sector that holds existing data goes bad and can't be read.
Sometimes a drive will show an error in the SMART report but the sector isn't reallocated. That is probably a sector that had a soft (recoverable) error; these typically show up under the "Current Pending Sector" attribute. For example, power was cut while the sector was being written, so it was corrupted: it was unreadable, but after being overwritten later on, it was good again. Reallocation only occurs when repeated writes and re-reads fail, indicating a physically bad sector. If your drive has only a small number of reallocations and they don't increase, the drive is still good. But if the error counts start climbing on a regular basis, especially the reallocated sector count, it's time to get that drive out of there, since it's going bad.
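If you'd rather script this check than eyeball the report, here's a minimal sketch that shells out to smartctl (from the smartmontools package) and pulls the raw values of the attributes discussed above. The attribute IDs (5 = Reallocated_Sector_Ct, 197 = Current_Pending_Sector, 198 = Offline_Uncorrectable) are standard on most ATA/SATA drives, though vendors vary on the details; the device path /dev/sda is just an example.

```python
#!/usr/bin/env python3
"""Minimal sketch: pull the bad-sector-related SMART attributes via smartctl.

Assumes smartmontools is installed and the script runs with enough privileges
to query the drive (e.g. via sudo). /dev/sda is just an example device path.
"""
import subprocess

# These attribute IDs are standard on most ATA/SATA drives, though some
# attributes may be missing or named differently on a given drive.
WATCHED = {
    5:   "Reallocated_Sector_Ct",   # sectors remapped into the spare area
    197: "Current_Pending_Sector",  # unreadable sectors awaiting rewrite/remap
    198: "Offline_Uncorrectable",   # sectors that failed an offline scan
}

def smart_attributes(device="/dev/sda"):
    """Return {attribute_id: raw_value} parsed from `smartctl -A` output."""
    result = subprocess.run(
        ["smartctl", "-A", device],
        capture_output=True, text=True,
        check=False,  # smartctl encodes drive status in nonzero exit bits
    )
    values = {}
    for line in result.stdout.splitlines():
        fields = line.split()
        # Attribute rows start with the numeric ID; the raw value is the
        # tenth column (ID, name, flag, value, worst, thresh, type,
        # updated, when-failed, raw).
        if len(fields) >= 10 and fields[0].isdigit():
            attr_id = int(fields[0])
            if attr_id in WATCHED:
                values[attr_id] = int(fields[9])
    return values

if __name__ == "__main__":
    for attr_id, raw in sorted(smart_attributes().items()):
        print(f"{WATCHED[attr_id]} (ID {attr_id}): {raw}")
```

On a healthy drive all three raw values are normally 0, or a small number for reallocations that never grows.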
And it goes without saying, but backups are always important! Not all drive failures can be predicted with SMART; a drive can fail suddenly and completely. A power surge can fry the controller board, for example, or you get the "click of death" the next time you power it on.
As for the OP's question: keep an eye on the SMART data for the drive. If the error totals are small and stay constant even when you do a lot of reads and writes, the drive is probably OK. If they keep going up, that drive can't be trusted.
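And if you'd rather have a script nag you than remember to check, here's a rough sketch of the "watch the trend" part, reusing the smart_attributes() helper and WATCHED table from the sketch above. The history-file name and the use of JSON are just illustrative choices:

```python
import json
from pathlib import Path

# Hypothetical location for the last-run snapshot; put it wherever suits you.
HISTORY = Path("smart_history.json")

def check_trend(device="/dev/sda"):
    """Warn if any watched SMART counter grew since the last recorded run."""
    current = smart_attributes(device)  # helper from the earlier sketch
    previous = json.loads(HISTORY.read_text()) if HISTORY.exists() else {}
    for attr_id, raw in current.items():
        old = previous.get(str(attr_id), 0)
        if raw > old:
            print(f"WARNING: {WATCHED[attr_id]} rose from {old} to {raw} -- "
                  f"back up now and plan to replace this drive")
    # Save this run for the next comparison (JSON object keys are strings).
    HISTORY.write_text(json.dumps({str(k): v for k, v in current.items()}))
```

Run something like this from cron or Task Scheduler every day or so: a count that holds steady is fine, a count that keeps rising is your cue to replace the drive.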