There is no initialisation, and no charging in the sense that you are talking about.
Storage in a modern computer (in the working parts -- memory can operate quite differently) is like 2 paper cups and a ball. The ball is initially in one of the cups (this is random). Energy is used to move the ball to the other cup. By looking at the cups you can determine which cup the ball is in, and hence whether that particular bit is a 1 or a 0.
Energy isn't used for charging; it's used for changing states. All the energy is pretty much dissipated as heat.
The 2 cups and one ball analogy is a flip-flop. It has 2 states, both of them stable. A logic signal can be used to change the device from one to the other, but removing that signal does not cause it to change back.
Much of the rest of the innards of the CPU is made up of gates (actually flip-flops are too) that produce a certain output in response to the various inputs.
The simplest type is the inverter. The actual inputs and outputs are high or low logic levels (not balls in cups obviously). Whatever state the input of the inverter is, the output is the other state.
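If it helps to see that written down, here's a toy sketch in Python (just my own illustration; real gates are transistors, not code) treating a logic level as a boolean:

```python
# Model a logic level as a Python bool: True = high = 1, False = low = 0.

def inverter(a: bool) -> bool:
    # Whatever state the input is, the output is the other state.
    return not a

print(inverter(False))  # True  (a 0 in gives a 1 out)
print(inverter(True))   # False (a 1 in gives a 0 out)
```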
States are given names, 0 and 1, or true and false, or asserted and not asserted, etc., but they all mean the same thing -- on or off; there is no middle state.
Other gates (and from your comments you should know about them) are OR, AND, NAND, NOR, and XOR. Combinations of these can create flip-flops (two 2-input NOR gates is the easy way), adders, shift registers, all the way up to CPUs.
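Here's a rough Python continuation of the same toy model (my own sketch of the idea, not how hardware is actually designed) showing the two-NOR-gate flip-flop: pulse Set and the output goes to 1 and stays there; pulse Reset and it goes back to 0:

```python
def NOR(a: bool, b: bool) -> bool:
    return not (a or b)

# A set/reset flip-flop (SR latch) made from two cross-coupled NOR gates:
# each NOR's output feeds one input of the other NOR.
def sr_latch(s: bool, r: bool, q: bool, q_bar: bool):
    for _ in range(4):                    # iterate until the outputs settle
        q_new = NOR(r, q_bar)
        q_bar_new = NOR(s, q_new)
        if (q_new, q_bar_new) == (q, q_bar):
            break
        q, q_bar = q_new, q_bar_new
    return q, q_bar

q, qb = False, True                       # the "ball" starts in one cup
q, qb = sr_latch(True, False, q, qb)      # pulse Set:     q -> 1
q, qb = sr_latch(False, False, q, qb)     # remove signal: q stays 1
q, qb = sr_latch(False, True, q, qb)      # pulse Reset:   q -> 0
print(q, qb)                              # False True
```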
Certain types of flip-flops (more complex than just 2 NOR gates) will only change state when a clock input says they can. These are used all over most computers so that everything gets to settle into a final state before it is used to generate the next state (because nothing happens instantly, you need to make sure that a slower block of logic has finished before anything that depends on its outputs tries to use them). Hence the use of these clocked flip-flops and a clock signal to keep everything synchronised.
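A similar toy model of a clocked (rising-edge-triggered) flip-flop, again just to illustrate the behaviour: the stored bit only changes at the clock edge and holds steady the rest of the time, which is what gives the slower logic time to settle:

```python
class DFlipFlop:
    """Toy edge-triggered D flip-flop: the stored bit changes only on a
    rising clock edge, so everything downstream sees a stable value for
    the rest of the clock period."""
    def __init__(self):
        self.q = False
        self._last_clk = False

    def tick(self, clk: bool, d: bool) -> bool:
        if clk and not self._last_clk:    # rising edge: capture the input
            self.q = d
        self._last_clk = clk
        return self.q                     # otherwise just hold the old value

ff = DFlipFlop()
print(ff.tick(False, True))   # False -- input changed, but no clock edge yet
print(ff.tick(True, True))    # True  -- rising edge, new value captured
print(ff.tick(True, False))   # True  -- clock still high, value held
```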
The clock speed is often a major selling point of a CPU. Once upon a time, 4MHz was considered fast; now several GHz is not uncommon.
Rather than thinking about balls and cups, the *real* thing that is happening is that there are transistors (often MOSFETs) that effectively connect the output to either the +ve or -ve supply rail. The inputs of the gates are used to control these transistors. With the gate just sitting there in one state or the other, the current drain can be quite low (in some logic families theoretically zero).
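As a very rough model (ignoring all the analogue detail, and with made-up voltage levels), you can think of a gate output as two complementary switches, only one of which is ever on:

```python
VDD, GND = 1.0, 0.0   # the +ve and -ve supply rails (illustrative values)

def cmos_style_inverter(v_in: float) -> float:
    # The input turns on exactly one of two complementary transistors:
    # the p-type pull-up when the input is low, the n-type pull-down when high.
    pull_up_on = v_in < VDD / 2
    # Whichever one is on connects the output to its rail; while the gate just
    # sits in either state, (ideally) no current flows between the rails.
    return VDD if pull_up_on else GND

print(cmos_style_inverter(0.0))   # 1.0 -- output connected to the +ve rail
print(cmos_style_inverter(1.0))   # 0.0 -- output connected to the -ve rail
```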
When the state changes, one transistor will turn off and the other will turn on, switching the output from one supply rail to the other. In a perfect world this would be instantaneous and would consume no power. However, reality intervenes, and this change of state requires power. Ironically, much of the reason for this is capacitance. Whether that be capacitance in wires (2 conductors will have some capacitance between them) or in the transistors themselves, that capacitance needs to be charged or discharged to allow the state change to be effective. This takes time and energy and contributes directly or indirectly to power consumption, but more importantly to speed, since it takes TIME. Faster computers rely on devices with lower and lower capacitance (and lower and lower voltages) so they can switch faster and faster.
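To put some rough, entirely made-up numbers on that: each full charge/discharge of a node's capacitance dissipates roughly C·V² of energy, so dynamic power scales with the capacitance, with the square of the supply voltage, and with the clock frequency:

```python
# Back-of-envelope illustration only -- every number here is assumed,
# not taken from any real chip.
C = 1e-15       # ~1 femtofarad of capacitance on a single switched node
V = 1.0         # supply voltage, volts
f = 3e9         # 3 GHz clock
alpha = 0.1     # fraction of nodes that actually switch each cycle
n_nodes = 1e8   # number of switched nodes

# Each full charge/discharge of a node dissipates roughly C*V**2 of energy.
P_dynamic = alpha * n_nodes * C * V**2 * f
print(f"{P_dynamic:.0f} W")   # ~30 W with these made-up numbers
```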
Thus capacitance is something that works against the operation; it is NOT something on which computers rely.