A New Nuclear Technology Discussion Thread

Megalodon

Ars Legatus Legionis
35,084
Subscriptor++
Because from like a 'tick off this list of engineering and science issues' perspective, tokamaks (and arguably stellarators) are the furthest along. Alternative approaches pretty much all have a larger number of hurdles to overcome, and arguably less-certain-to-succeed ones.

Yes, that would be my analysis. I would even call it cliche at this point. Every approach I am aware of has discovered challenges to confinement that were not obvious before experiments of sufficient scale were done. The only exceptions are approaches that have not been demonstrated at sufficient scale for these problems to be documented yet, and ill-advised inertial confinement concepts that cheat by relying on fission and don't scale well below multiple terajoules per shot.

That doesn't mean there's no possible confinement concept with better scaling properties; different approaches having different scaling characteristics happens all the time in nature. But it does mean the technical risk of unproven concepts is extremely high, and anticipating commercial viability for those before they're demonstrated seems premature.
 
That's an interesting take. I'm curious what you think the shortcomings are, and which other approach is better positioned in terms of commercial viability?

My thinking is that inertial has very difficult problems in terms of fuel fabrication, and achieving reliable operation would involve very difficult transients if you were to ramp up to several shots per second. Of the various magnetic or other approaches, tokamaks are the only one close enough to scientific breakeven that we can be confident of scaling into breakeven territory. Stellarators are arguably not that far off, but then it looks a lot like they have a much tougher job in terms of fabricating the magnets etc., which would be relevant for a commercial design. Other magnetic confinement concepts, or other approaches such as electrostatic confinement, are either unproven at similar scale or outright exude all the hallmarks of snake oil.
Let me explain what I meant by that. To engineer a fusion power plant, you first have to have the plasma physics work out, so this all assumes that the physics works for the concepts. Given that, tokamaks have characteristics that make them harder and less economical to turn into a power plant. That's the reason that most of the commercial fusion concepts with significant funding are based on alternative concepts and are not classical tokamak designs. The funders think that if the plasma physics works out, they can make an economically competitive power plant.

Tokamaks are plasmas with a low ratio of plasma pressure to magnetic pressure (this ratio is called beta), so they need higher-field magnets to have the same plasma pressure as the higher-beta concepts. The beta in a tokamak is a couple percent. The beta in an FRC is > 50%, and a Z-pinch has no magnetic field coils at all. This is one of the things that makes FRCs more suitable for the advanced fuels than a tokamak. Spheromaks have more efficient means of current drive and might not require auxiliary heating. Those are big costs in tokamaks.

The tokamak toroidal geometry is more complex to build and perform maintenance on than the other designs, and it also has more problems with plasma-wall interactions than some designs which don't have the plasma close to the wall.

The alternative concepts are now being built at a scale larger than the previous government-funded programs did. They will show in the next few years if they can hit their breakeven targets. Even if they don't, it was done at a fraction of the cost of ITER, which is a risk worth taking given the potential benefit.
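To make the beta point concrete, here's a rough sketch (my own round numbers, not from any specific machine): for a fixed plasma pressure, the confining field you need scales as 1/sqrt(beta), so a few-percent-beta tokamak needs several times the field of a high-beta FRC.

```python
# Illustrative sketch: beta = plasma pressure / magnetic pressure.
# All numbers are round assumed values for comparison, not measured data.
MU0 = 4e-7 * 3.141592653589793  # vacuum permeability, T*m/A

def field_needed(plasma_pressure_pa, beta):
    """Field B required so that beta * B^2/(2*mu0) equals the plasma pressure."""
    return (2 * MU0 * plasma_pressure_pa / beta) ** 0.5

p = 1e5  # assume ~1 atmosphere of plasma pressure
b_tokamak = field_needed(p, beta=0.03)  # beta of a few percent
b_frc = field_needed(p, beta=0.5)       # FRC-like beta > 50%

print(f"tokamak: {b_tokamak:.1f} T, FRC: {b_frc:.1f} T")
```

The field ratio between the two cases is sqrt(0.5/0.03), about a factor of four, which is the cost driver the post is pointing at.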

I think the inertial fusion companies are spending their money mostly on driver development. The lasers at NIF are only about 1% efficient in converting electric energy to laser energy. There are other kinds of high-power lasers that are close to 10% efficient. The repetition rate also needs to be increased. They would also use direct drive instead of the indirect drive hohlraum approach that NIF used. Development of methods to mass produce viable targets is also needed.
 

Dan Homerick

Ars Praefectus
5,349
Subscriptor
How much of the plasma pressure and temperature is necessary to overcome the proton-proton repulsion, and how much is just to ensure there are lots of opportunities for collisions?

That is, if you could precisely steer the atoms into each other with just the right amount of momentum to facilitate fusion, could the input power be relatively low?

I'll admit, what's prompting this question is more along the lines of science fiction than practical engineering. Outer space gives some interesting working conditions, in that you can work with really huge distances. What if you created a collimated beam of ions -- finely guided with tiny steering nudges applied every couple thousand meters for 100s or even 1000s of km -- and you smashed them into another beam heading the opposite way.

I'm picturing a beam that is several meters across, with ions gently nudging each other into a regular lattice, travelling in formation. Line up the lattices just so for collision, such that most atoms will hit their oncoming partner with a high % chance of fusing.

Or is it not really possible to "precisely" guide two atoms together -- you need to squeeeeze them such that they can't deflect as they approach each other?
 

Ananke

Ars Tribunus Militum
2,149
Subscriptor
There are various approaches for trapping atoms in fairly precise locations (e.g. the work of Barredo et al. in the group at the Institut d'Optique).

However, the traps are extremely shallow (thus, very cold atoms are required) and quite wide: a 1/e^2 radius of about 1 μm, compared to an atomic radius of about 250 pm (for Rb, the species they use), so the atom fills just 0.025% of the trap.

Those two aspects are fundamentally incompatible with creating two such lattices and smashing them together hard enough and accurately enough to generate a meaningful fusion yield (or, well, any fusion yield).

To be sure, there are probably other ways of doing it: optical tweezers is just one approach I am intimately familiar with. If you can work with ions, you can generate vastly deeper traps with electrodes (think, like, the difference in depth of texture between Manhattan and bubble wrap) than you can with lasers.
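For scale, the fill fraction quoted above is just the linear ratio of the two radii; a one-liner to check it:

```python
# Quick check of the size ratio quoted above (linear ratio, not volume).
atom_radius_m = 250e-12   # ~atomic radius of Rb
trap_radius_m = 1e-6      # 1/e^2 radius of an optical-tweezer trap
ratio = atom_radius_m / trap_radius_m
print(f"{ratio:.3%}")  # linear fill fraction
```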
 

Megalodon

Ars Legatus Legionis
35,084
Subscriptor++
How much of the plasma pressure and temperature is necessary to overcome the proton-proton repulsion, and how much is just to ensure there are lots of opportunities for collisions?

Not clear to me there is a distinction.

Or is it not really possible to "precisely" guide two atoms together -- you need to squeeeeze them such that they can't deflect as they approach each other?

It's a quantum thing, the relevant concept is "cross section". For any given interaction there will be a cross section that allows you to calculate what fraction of interactions will have a particular outcome. Hence there will be a fusion cross section for two isotopes (like D-T), and lots of other reactions like eg neutron capture cross section, which is relevant to fission power.

The units are given as an area (a "barn" is 10 femtometers by 10 femtometers) but there's more to it than that, as some isotopes may have a wildly disproportionate cross section for certain reactions, like Xenon-135 having a thermal neutron capture cross section of 3 million barns, whereas Lead-206 is 0.03 barns. This gets into the orbitals of the nucleons within the nucleus and the various resonances and magic numbers that make some outcomes more likely, etc. It's all very counterintuitive.

So yes, you are rolling the dice when you throw two ions at each other, there's no way within physics to do this so they can't miss. This is why the "triple product" is important in fusion, which is temperature * density * time, as you can work out how many fusions you get from that and knowing the fusion cross sections of the inputs. Assuming a thermal plasma anyway.

The triple product is how you get the two major approaches to fusion, inertial which goes for density, and magnetic which goes for time, with both needing temperature.
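To sketch how the pieces combine (round numbers; the D-T reactivity at ~10 keV is taken as ~1.1e-22 m^3/s, which is the commonly quoted order of magnitude): the reaction rate per volume goes as density squared times the Maxwellian-averaged cross section, which is why density and confinement time are to some extent fungible.

```python
# Sketch of fusion power density for a 50/50 D-T plasma.
# sigma_v is an assumed Maxwellian-averaged reactivity at ~10 keV.
E_DT_J = 17.6e6 * 1.602e-19   # energy released per D-T fusion, joules
sigma_v = 1.1e-22             # assumed <sigma*v> for D-T, m^3/s

def fusion_power_density(n_ions_per_m3):
    """Thermal D-T fusion power density in W/m^3 for a 50/50 mix."""
    n_d = n_t = n_ions_per_m3 / 2
    return n_d * n_t * sigma_v * E_DT_J

# Magnetic confinement: modest density (~1e20 /m^3), so you need long
# confinement times; inertial goes to enormous density for nanoseconds.
print(fusion_power_density(1e20))  # W/m^3 at a tokamak-like density
```

Note the density-squared dependence: doubling density quadruples the rate, which is the trade the triple product captures.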
 

Dan Homerick

Ars Praefectus
5,349
Subscriptor
Not clear to me there is a distinction.
Below a certain amount of momentum / velocity / thermal energy (take your pick), I wouldn't expect fusion to be possible because the positive charges of the nuclei would keep them too far apart from each other to fuse.

So the distinction is: do you need a lot of thermal energy (momentum) for fusion to happen at all, or is it merely required if you want a lot of fusion events to happen quickly due to random collisions?

It's a quantum thing, the relevant concept is "cross section". For any given interaction there will be a cross section that allows you to calculate what fraction of interactions will have a particular outcome. Hence there will be a fusion cross section for two isotopes (like D-T), and lots of other reactions like eg neutron capture cross section, which is relevant to fission power.

The units are given as an area (a "barn" is 10 femtometers by 10 femtometers) but there's more to it than that, as some isotopes may have a wildly disproportionate cross section for certain reactions, like Xenon-135 having a thermal neutron capture cross section of 3 million barns, whereas Lead-206 is 0.03 barns. This gets into the orbitals of the nucleons within the nucleus and the various resonances and magic numbers that make some outcomes more likely, etc. It's all very counterintuitive.

So yes, you are rolling the dice when you throw two ions at each other, there's no way within physics to do this so they can't miss. This is why the "triple product" is important in fusion, which is temperature * density * time, as you can work out how many fusions you get from that and knowing the fusion cross sections of the inputs. Assuming a thermal plasma anyway.

The triple product is how you get the two major approaches to fusion, inertial which goes for density, and magnetic which goes for time, with both needing temperature.
I'm always a little bit suspicious when probabilities are used, and the explanation is "it's quantum". Because, while I accept that at some scales, things are random and a probability distribution is the only correct way to think about the state... I also know that statistics are regularly employed to simplify models, or to explain something without needing an incredibly precise and accurate understanding of the starting conditions.

I don't have a good way of telling those two situations apart: statistics as the only way vs statistics as simplification. I don't have an intuitive sense for when the "it's quantum" explanation is really the best that is possible. There's Heisenberg's uncertainty principle, but it's based on a Planck length and as I recall it's most applicable for stuff that is waaaay smaller and lighter than a nucleus, or for things which are moving at closer to relativistic speeds.

I guess that's kind of a me problem, though. :D
 

Megalodon

Ars Legatus Legionis
35,084
Subscriptor++
Below a certain amount of momentum / velocity / thermal energy (take your pick), I wouldn't expect fusion to be possible because the positive charges of the nuclei would keep them too far apart from each other to fuse.

It's not a hard cutoff, it's a statistical process. The reaction rate never goes to zero. Theoretically that remains true down to absolute zero, though it becomes unmeasurable long before that. It's more a question of whether you achieve "ignition", that is, enough heating to achieve a positive feedback with the reaction rate, which requires a reaction rate fast enough to build up heat.

Stars have relatively low core temperatures relative to our reactors because their density is high, the confinement time is ~infinite, and they're not that good at getting rid of heat for square cube reasons. The actual power density of the sun is tiny, in the core it's similar to a compost pile (~300 watts per cubic meter).
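The "never goes to zero" point falls out of the tunneling probability through the Coulomb barrier, which scales roughly as exp(-sqrt(E_G/E)). A minimal sketch, taking the textbook Gamow energy for D-T of ~1.18 MeV as an assumption:

```python
import math

# Gamow tunneling factor exp(-sqrt(E_G/E)): barrier penetration
# probability for two colliding charged nuclei. E_G ~ 1.18 MeV for D-T
# is a standard textbook value, treated here as an assumption.
E_G_keV = 1180.0

def tunneling_factor(E_keV):
    return math.exp(-math.sqrt(E_G_keV / E_keV))

for E in (1.0, 10.0, 100.0):
    print(E, tunneling_factor(E))
```

The probability climbs steeply with energy but is strictly positive at any energy, which is why there's no hard cutoff, just a rate that becomes unmeasurably small.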

So the distinction is: do you need a lot of thermal energy (momentum) for fusion to happen at all, or is it merely required if you want a lot of fusion events to happen quickly due to random collisions?

The triple product of temperature * density * time is a far more useful way to think about this, I think.


I'm always a little bit suspicious when probabilities are used, and the explanation is "it's quantum". Because, while I accept that at some scales, things are random and a probability distribution is the only correct way to think about the state... I also know that statistics are regularly employed to simplify models, or to explain something without needing an incredibly precise and accurate understanding of the starting conditions.

Well you're out of luck in this case because even a single particle-particle interaction needs to be understood in statistical terms, because that's how things work at that scale. Worse, there is no non-statistical definition of temperature, which means at a given temperature there is a probability distribution of particle energies and the high end of that distribution is important for the fusion rate.
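The tail point can be made quantitative: integrating the Maxwell-Boltzmann energy distribution gives a closed form for the fraction of particles above a given multiple of kT. A quick sketch:

```python
import math

# Fraction of particles in a Maxwellian with kinetic energy above x*kT,
# from integrating the Maxwell-Boltzmann energy distribution:
# F(x) = (2/sqrt(pi)) * sqrt(x) * exp(-x) + erfc(sqrt(x))
def fraction_above(x):
    return (2 / math.sqrt(math.pi)) * math.sqrt(x) * math.exp(-x) \
        + math.erfc(math.sqrt(x))

print(fraction_above(1))   # above kT: more than half the particles
print(fraction_above(5))   # into the tail: a couple percent
print(fraction_above(10))  # deep tail: ~1e-4, but never zero
```

That deep tail is the part of the distribution doing most of the fusing at a given bulk temperature.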

I don't have a good way of telling those two situations apart: statistics as the only way vs statistics as simplification. I don't have an intuitive sense for when the "it's quantum" explanation is really the best that is possible.

Well in this case the concepts of temperature, entropy, etc are all fundamentally statistical. The laws of physics are time symmetric which means we can't talk about the arrow of time without a statistical account of entropy.

There's Heisenberg's uncertainty principle, but it's based on a Planck length and as I recall it's most applicable for stuff that is waaaay smaller and lighter than a nucleus, or for things which are moving at closer to relativistic speeds.

But we are talking about stuff smaller and lighter than a nucleus, namely the constituents of a nucleus. An atomic nucleus is a composite entity. Protons and neutrons within a nucleus have orbitals much like electrons do, though at a far smaller scale because of the larger mass, and these orbitals arise from their quantum mechanical properties just like those of electrons. Electrons have a probabilistic distribution rather than a definite location, and the same is true of the constituents of a nucleus, which means it is impossible to completely control or predict what will happen in any given nucleus-nucleus interaction.

This is icky in similar ways that all quantum stuff is icky, but the one thing every attempt to provide a non-statistical account of such interactions has in common is total and utter failure. So you have a long way to go if you think you can do better.
 
Below a certain amount of momentum / velocity / thermal energy (take your pick), I wouldn't expect fusion to be possible because the positive charges of the nuclei would keep them too far apart from each other to fuse.

So the distinction is: do you need a lot of thermal energy (momentum) for fusion to happen at all, or is it merely required if you want a lot of fusion events to happen quickly due to random collisions?


I'm always a little bit suspicious when probabilities are used, and the explanation is "it's quantum". Because, while I accept that at some scales, things are random and a probability distribution is the only correct way to think about the state... I also know that statistics are regularly employed to simplify models, or to explain something without needing an incredibly precise and accurate understanding of the starting conditions.

I don't have a good way of telling those two situations apart: statistics as the only way vs statistics as simplification. I don't have an intuitive sense for when the "it's quantum" explanation is really the best that is possible. There's Heisenberg's uncertainty principle, but it's based on a Planck length and as I recall it's most applicable for stuff that is waaaay smaller and lighter than a nucleus, or for things which are moving at closer to relativistic speeds.

I guess that's kind of a me problem, though. :D
I'm not sure why you would think that nuclear reactions aren't governed by quantum physics. The height of the Coulomb barrier is the potential at the radius of the strong/nuclear force potential well, and classically there would be a minimum cutoff energy for fusion. Quantum tunneling lets particles with less than that energy get through the Coulomb barrier and undergo a fusion reaction. Fusion cross sections have fairly smooth behavior because the nuclei are simple, but the plots show that for p-B fusion there is a resonance peak. Uranium neutron fission cross sections have a lot of structure to them because of the large, complex nucleus with lots of resonance behavior.

The Planck length is very small and is a length scale associated with quantum gravity. We could not observe quantum effects if they only happened on that length scale. Maybe you meant the de Broglie wavelength, which has Planck's constant in it. In any case, quantum interference effects and other quantum behavior can be seen at low energy with fairly large molecules and even macroscopic systems with lots of atoms.
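For a sense of scale, the de Broglie wavelength of a deuteron at fusion-relevant energies is hundreds of femtometers, far larger than a nuclear radius of a few fm, which is why a classical point-particle picture of the collision fails. A rough sketch with round constants:

```python
import math

# de Broglie wavelength lambda = h / sqrt(2 m E) for a deuteron
# (non-relativistic; round constants).
H = 6.626e-34           # Planck's constant, J*s
M_DEUTERON = 3.344e-27  # deuteron mass, kg
KEV = 1.602e-16         # joules per keV

def de_broglie_m(E_keV):
    return H / math.sqrt(2 * M_DEUTERON * E_keV * KEV)

lam = de_broglie_m(10)       # deuteron at ~10 keV
print(f"{lam * 1e15:.0f} fm")  # hundreds of fm vs a few-fm nucleus
```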
 

Shavano

Ars Legatus Legionis
63,778
Subscriptor
Stars have relatively low core temperatures relative to our reactors because their density is high, the confinement time is ~infinite, and they're not that good at getting rid of heat for square cube reasons. The actual power density of the sun is tiny, in the core it's similar to a compost pile (~300 watts per cubic meter).
Never saw that explicitly stated before but it makes sense.

In terms of practical fusion, the ideal situation would be if you could slam a proton or deuteron beam into solid tritium, or something else that will give you a fusion reaction on impact. The solid material solves the confinement problem because there's a much much higher probability that the impact will result in a fusion. But at the temperatures you need to work with, or will be working with a moment after your reactor starts, you won't have solid material. You'll have a much larger volume of vaporized or atomized material. So it doesn't seem like that could be sustained even if materials you could productively react that way exist.
 

Dmytry

Ars Legatus Legionis
10,652
The other thing to note is that for each lucky collision where nuclei fuse, there's a lot of collisions where they just bounce. So even if you could magically collide nuclei head on (I'm saying "magically" because I don't think quantum mechanics even permits such a thing), most collisions will result in your nuclei just bouncing in some random direction after which it can only collide with another nucleus by chance, and would require high density to do so.
 

Megalodon

Ars Legatus Legionis
35,084
Subscriptor++
The other thing to note is that for each lucky collision where nuclei fuse, there's a lot of collisions where they just bounce. So even if you could magically collide nuclei head on (I'm saying "magically" because I don't think quantum mechanics even permits such a thing), most collisions will result in your nuclei just bouncing in some random direction after which it can only collide with another nucleus by chance, and would require high density to do so.

This is why I think the metric the field uses, namely the triple product (time * temperature * density) is a pretty good measure. I'm sure there's areas where it's inaccurate, like that resonance in the p-B11 fusion cross section, but it at least accommodates the fact that those things are to some extent fungible.
 
This is why I think the metric the field uses, namely the triple product (time * temperature * density) is a pretty good measure. I'm sure there's areas where it's inaccurate, like that resonance in the p-B11 fusion cross section, but it at least accommodates the fact that those things are to some extent fungible.
The triple product and the temperature-dependent Lawson criterion (energy confinement time * density) are derived from energy-dependent cross sections integrated over energy-dependent particle distribution functions. The link in the previous sentence describes where they come from.
 
A company in Finland called Steady Energy is developing an SMR for district heating. The LDR-50 is a 50 MWt natural circulation PWR that operates at a low temperature (150 C) and pressure (< 10 bar) compared to power reactors. From the article:
"Decarbonising residential heating in Europe alone is a market with significant growth potential in the hundreds of billions of euros," according to Steady Energy. "Throughout Europe, there are approximately 3500 district heating networks which serve 60 million people, largely powered by fossil fuels. Successful, large-scale decarbonisation of district heating can significantly cut greenhouse gas emissions."
 
A company in Finland called Steady Energy is developing an SMR for district heating. The LDR-50 is a 50 MWt natural circulation PWR that operates at a low temperature (150 C) and pressure (< 10 bar) compared to power reactors. From the article:
That seems like a really good use. No need for all those expensive high pressure vessels/lines/pumps/etc, no expensive turbine set, etc. Presumably it could be hooked up to the existing pipes used by current fossil fueled district heating.
 

demultiplexer

Ars Praefectus
4,009
Subscriptor
A company in Finland called Steady Energy is developing an SMR for district heating. The LDR-50 is a 50 MWt natural circulation PWR that operates at a low temperature (150 C) and pressure (< 10 bar) compared to power reactors. From the article:
I really wonder what the business model is, and if they even thought of that before starting the design. There is just no possible way to produce low-quality heat at a profit with a nuclear reactor, regardless of the type.
 

Scotttheking

Ars Legatus Legionis
12,214
Subscriptor++
I really wonder what the business model is, and if they even thought of that before starting the design. There is just no possible way to produce low-quality heat at a profit with a nuclear reactor, regardless of the type.
From article: Steady Energy said it will "plan its business models according to the needs of the customer"
aka, they don’t have one
but seeing as it spun out of a local university, perhaps it has a subsidy route to replace fossil fuel burning.
 
I really wonder what the business model is, and if they even thought of that before starting the design. There is just no possible way to produce low-quality heat at a profit with a nuclear reactor, regardless of the type.
What if they get paid to store "spent" fuel rods from other reactors and use those to produce the heat?
 

w00key

Ars Tribunus Angusticlavius
6,786
Subscriptor
I really wonder what the business model is, and if they even thought of that before starting the design. There is just no possible way to produce low-quality heat at a profit with a nuclear reactor, regardless of the type.
It doesn't really qualify as a traditional reactor, though. Turbine cost: none. Pumps, backup power, x3 sets for safety: gone. It's more like a big RTG; once built, you don't really need to do much. Safety is a secondary pool that passively boils off; no pumps, pressure, or piping that can go wrong.

I hope that makes it cheap enough to use for low grade energy; the current market rate for heating (< 90C) is €76/GJ so 50MWt generates €13680/hour, or €119M per year. But load factor is probably way lower so who knows...
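For what it's worth, the arithmetic checks out, and it's easy to parameterize by load factor (the €76/GJ rate and 50 MWt figure are the numbers from this post):

```python
# Back-of-envelope check of the revenue figures above, with load factor
# as a free parameter.
PRICE_EUR_PER_GJ = 76
POWER_MWT = 50

gj_per_hour = POWER_MWT * 3600 / 1000   # 50 MW = 0.05 GJ/s -> 180 GJ/h
eur_per_hour = gj_per_hour * PRICE_EUR_PER_GJ

def annual_revenue(load_factor):
    return eur_per_hour * 8760 * load_factor

print(eur_per_hour)         # €13,680/hour at full output
print(annual_revenue(1.0))  # ~€120M/year at 100% load factor
print(annual_revenue(0.5))  # halved at a 50% load factor
```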
 

Auguste_Fivaz

Ars Praefectus
4,615
Subscriptor++
https://www.theatlantic.com/science...y-authority-energy-transition-nuclear/674729/
This essay has some insight into the TVA's nuclear program both existing and future. It also covers their plans for PV, battery and wind as well as the infrastructure needed for all of the above.

But the agency completed only seven of a planned 17 reactors—demand for electricity grew slower than forecast—and today, unfinished reactor hulks lie scattered around the Valley. The fiasco left TVA constrained by debt, which still totals nearly $20 billion.
Nevertheless, TVA is proud of its nuclear fleet. Although Georgia Power is expected to bring a new reactor online soon, TVA has been the only U.S. utility to have managed that in the past three decades. It began construction on the two reactors at its Watts Bar plant, in Tennessee, in 1973; mothballed them for years; then completed them in 1996 and 2016. In the first half of 2023, they and the agency’s other reactors helped it generate nearly 60 percent of its kilowatt-hours without emitting carbon—significantly higher than the national average.
and
Like many nuclear engineers these days, he thinks the future lies in small modular reactors, or SMRs. At a site on the Clinch River, TVA is planning the first of what it hopes will be a fleet of 20 or so identical SMRs, using a relatively conventional design. “Our goal is not just to build a plant, but to build a plant that sets the model for the U.S. industry,” Greg Boerschig, one of the engineers running the TVA effort, told me.
If you subscribe, it is a good read with a lot of history on the project over the years and how it is working now. It doesn't seem to cost us taxpayers anything, which, unlike PG&E, is a good thing, even if they are dragging their feet and still rolling coal.
 

mboza

Ars Tribunus Militum
2,607
Subscriptor++
It doesn't really qualify as a traditional reactor, though. Turbine cost: none. Pumps, backup power, x3 sets for safety: gone. It's more like a big RTG; once built, you don't really need to do much. Safety is a secondary pool that passively boils off; no pumps, pressure, or piping that can go wrong.

I hope that makes it cheap enough to use for low grade energy; the current market rate for heating (< 90C) is €76/GJ so 50MWt generates €13680/hour, or €119M per year. But load factor is probably way lower so who knows...
I wonder how much energy you can buffer in the heat network piping for the morning peak.

The other question: their key design feature is that the reactor runs at a lower pressure than the district heating loop, so any leak in the heat exchanger is towards the reactor. But wouldn't that work with any reactor design that had a low-pressure loop between the reactor and the district heating? Run a traditional turbine for electricity, cool it with the district heating loop, and you get some fraction of the reactor's output as electricity instead of 90 C heat.

I saw a presentation once from Aberdeen Heat and Power, using gas turbines for their heat network, running the turbines to match the heat demand, and selling whatever electricity that generated as a by-product.

But €76/GJ is €273/MWh, so seems like you want to take all your power output as heat?
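On the buffering question: a crude sketch of how much heat the network water itself can hold (every number here is an assumption for illustration, not from the thread):

```python
# Rough thermal buffering estimate for a district heating network.
# Assumed: 5000 m^3 of water in the pipes, allowed to swing 10 K warmer.
RHO_WATER = 1000.0  # kg/m^3
C_WATER = 4186.0    # J/(kg*K)

def buffered_mwh(volume_m3, delta_t_k):
    """Heat stored by letting the network water run delta_t_k hotter."""
    joules = volume_m3 * RHO_WATER * C_WATER * delta_t_k
    return joules / 3.6e9  # J -> MWh

mwh = buffered_mwh(5000, 10)   # both inputs are assumptions
hours_at_50mw = mwh / 50
print(mwh, hours_at_50mw)      # ~an hour of full reactor output
```

So under these made-up but plausible numbers, the pipes alone buffer on the order of an hour of the morning peak; a dedicated hot-water tank would extend that cheaply.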
 
https://www.theatlantic.com/science...y-authority-energy-transition-nuclear/674729/
This essay has some insight into the TVA's nuclear program both existing and future. It also covers their plans for PV, battery and wind as well as the infrastructure needed for all of the above.


and

If you subscribe, it is a good read with a lot of history on the project over the years and how it is working now. It doesn't seem to cost us taxpayers anything, which, unlike PG&E, is a good thing, even if they are dragging their feet and still rolling coal.
TVA announced last year that they want to deploy a GEH BWRX-300 SMR at their Clinch River site. The Clinch River site was going to be where a fast breeder demonstration reactor was going to be built until the program was cancelled in the early 1980s. TVA is also interested in the Kairos Power molten salt cooled, pebble bed reactor design.
 
I wonder how much energy you can buffer in the heat network piping for the morning peak.

Other question would be that their key design feature is that the reactor runs at a lower pressure than the district heating, so any leak in the heat exchanger is towards the reactor. But wouldn't that work with any reactor design that had a low pressure loop between the reactor and the heating district? Run a traditional turbine for electricity, cool it with the district heating loop, and you get to some fraction of the reactor as electricity instead of 90C heat.

I saw a presentation once from Aberdeen Heat and Power, using gas turbines for their heat network, running the turbines to match the heat demand, and selling whatever electricity that generated as a by-product.

But €76/GJ is €273/MWh, so seems like you want to take all your power output as heat?
China is using some of its power reactors to supply district heating and industrial steam in addition to electricity. They are also developing a special purpose low temperature and pressure 200 MWt reactor to supply district heating.
 

Megalodon

Ars Legatus Legionis
35,084
Subscriptor++
Wasn't sure where to put this, but I randomly ran into a few references that Lead-208 might be a useful moderator. It's a weak moderator compared to light nuclei, but has a very low neutron capture cross section, lower even than deuterium. Presumably because it is a doubly magic isotope, giving it exceptional stability. That suggests the possibility of a lead cooled thermal spectrum reactor, which is potentially handy because it would have a negative void coefficient of reactivity and I would guess also a negative temperature coefficient of reactivity as well, much like light water reactors do. Metal cooled fast reactors have really nasty accident scenarios due to their void coefficient (core disruptive accidents, CDAs).

It would also presumably be able to use lower enrichment than typical molten metal reactors, which are typically fast spectrum, and be able to operate at ambient pressure. This also suggests it would have a lot of wiggle room in exactly what spectrum you chose, because it would be quite a weak moderator. You could go for a somewhat harder spectrum and perhaps get some of the actinide burning properties of fast reactors. Compared to molten salt reactors you avoid the physical stresses and potential problems from using graphite as a moderator, a liquid moderator is always in the most favorable geometry and can't be packed any differently, only expand when it gets hot.

Lead-208 would likely be as difficult to enrich from natural isotope abundances as Uranium-235, or more so, but apparently it's naturally enriched in thorium deposits, as it's the last stop on the thorium decay series.

https://www.hindawi.com/journals/stni/2011/252903/
 
Last edited:

MilleniX

Ars Tribunus Angusticlavius
7,267
Subscriptor++
a liquid moderator is always in the most favorable geometry and can't be packed any differently
Any liquid fuel/moderator fix could still be subject to separation effects, though. Any novel reactor design is going to need a hell of a lot of modeling just to be confident it's safe to build, before anyone even starts building experimental versions.
 

Megalodon

Ars Legatus Legionis
35,084
Subscriptor++
Any liquid fuel/moderator fix could still be subject to separation effects, though. Any novel reactor design is going to need a hell of a lot of modeling just to be confident it's safe to build, before anyone even starts building experimental versions.

That's true, but fortunately there's been enough done with lead cooled reactors that it's not a total mystery.
 
I haven't read the linked paper, but you need more than a low capture cross section to make a good moderator. The problem with lead is that the atomic mass is large, which means the fractional energy loss per collision is small. It will take a lot of collisions to slow down to thermal energies, which means a neutron has a greater chance of getting captured by U-238 resonances and a greater chance of leaking from the reactor while still fast. Looking at the six-factor formula, the terms I am talking about are the resonance escape probability and the fast non-leakage probability. They are both made smaller when the fractional energy loss per collision is smaller, which is quantified by the average lethargy gain per scattering event in the formula.

Edit: I'll try to look at the linked paper tonight to see if there is something beyond the basic considerations I listed that make it better than I think it would be.
 
Last edited:
Wasn't sure where to put this, but I randomly ran into a few references that Lead-208 might be a useful moderator. It's a weak moderator compared to light nuclei, but has a very low neutron capture cross section, lower even than deuterium. Presumably because it is a doubly magic isotope giving it exceptional stability. That suggests the possibility of a lead cooled thermal spectrum reactor, which is potentially handy because it would have a negative void coefficient of reactivity and I would guess also a negative temperature coefficient of reactivity as well, much like light water reactors do. Metal cooled fast reactors have really nasty accident scenarios due to their void coefficient (CDAs).

It would also presumably be able to use lower enrichment than typical molten metal reactors, which are typically fast spectrum, and to operate at ambient pressure. This also suggests it would have a lot of wiggle room in exactly what spectrum you choose, because it would be quite a weak moderator. You could go for a somewhat harder spectrum and perhaps get some of the actinide-burning properties of fast reactors. Compared to molten salt reactors you avoid the physical stresses and potential problems of using graphite as a moderator; a liquid moderator is always in the most favorable geometry and can't be packed any differently, only expand when it gets hot.

Lead-208 would likely be at least as difficult to enrich from natural isotopic abundances as Uranium-235, but apparently it's naturally enriched in thorium deposits, since it's the last stop on the thorium decay series.

https://www.hindawi.com/journals/stni/2011/252903/
I got a chance to read the linked paper. It's interesting that the abstract talks about a new moderator when the paper shows why it is not a good moderator (the issues I listed in my previous post) and only looks at its impact on fast reactor designs. The low capture cross section might make it possible to build a thermal-spectrum reactor using a graphite moderator and lead coolant, with good fuel-temperature and coolant density/temperature feedback properties. That would need to be demonstrated with detailed calculations, since the capture cross section of Pb-208 becomes larger than that of C-12 at neutron energies above ~100 eV, according to Figure 2 in the paper.
 
In the attempt to move the Helion discussion to this thread, here's a preamble that applies to Helion and the other commercial fusion companies.

To engineer a fusion power plant, you first have to have the plasma physics work out, so the following assumes the physics works for these concepts. Having the physics work is not a guarantee that a concept can become an economically viable power plant; many engineering problems need to be solved for that. Given that, tokamaks have characteristics that make them harder and less economical to turn into a power plant. That's the reason most of the commercial fusion concepts with significant funding are based on alternative concepts and are not classical tokamak designs. The funders think that if the plasma physics works out, they can make an economically competitive power plant. All of the commercial magnetic fusion companies are at the stage of working to show that the physics works for their concept. Some are also working in parallel on the engineering needed to turn it into an economically viable power plant.

Tokamaks are plasmas with a low ratio of plasma pressure to magnetic pressure (this ratio is called beta), so they need higher-field magnets to reach the same plasma pressure as the higher-beta concepts. The beta in a tokamak is a few percent; the beta in an FRC is > 50%. A Z-pinch has no magnetic field coils at all. This is one of the things that makes FRCs and some other concepts more suitable for the advanced fuels than a tokamak. Spheromaks and reversed-field pinches have more efficient means of current drive and might not require auxiliary heating. Some concepts are amenable to direct conversion of fusion energy to electricity without an external thermal cycle. Some do not need superconducting magnets. Those are big costs in tokamaks. The tokamak's toroidal geometry is more complex to build and maintain than the other designs, and it also has more problems with plasma-wall interactions than designs that don't have the plasma close to the wall.
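To put rough numbers on the beta argument, here's a sketch of the magnetic field needed to hold a given plasma pressure at different betas; the density and temperature are my own illustrative assumptions, not any company's design point:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m
J_PER_EV = 1.602e-19      # joules per electron volt

def field_for_beta(n_m3, T_eV, beta):
    """Field B such that plasma pressure n*k*T equals beta * B^2/(2*mu0)."""
    p = n_m3 * T_eV * J_PER_EV  # plasma pressure, Pa
    return math.sqrt(2 * MU0 * p / beta)

n, T = 1e20, 15e3  # particles/m^3 and 15 keV, illustrative only
print(f"tokamak-like beta = 0.03: B = {field_for_beta(n, T, 0.03):.1f} T")
print(f"FRC-like     beta = 0.5 : B = {field_for_beta(n, T, 0.5):.1f} T")
```

Going from a beta of a few percent to >50% cuts the required field severalfold at the same pressure, and magnet cost and difficulty rise steeply with field.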

The inertial fusion companies are spending their money mostly on driver development. The lasers at NIF are only about 1% efficient at converting electrical energy to laser energy, and the indirect hohlraum approach adds a further inefficiency. Commercial plants would need to use direct drive instead of the indirect-drive hohlraum approach that NIF used. There are other kinds of high-power lasers being worked on that are close to 10% efficient, and one company is using a projectile driver. The repetition rate also needs to be increased, and methods to mass-produce viable targets need to be developed.
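A back-of-envelope sketch of why driver efficiency matters so much: the engineering gain is roughly target gain × driver efficiency × thermal efficiency, and its inverse is the recirculating power fraction. The 40% thermal efficiency and 25% allowed recirculating fraction below are my own assumptions:

```python
def required_gain(eta_driver, eta_thermal=0.4, max_recirc=0.25):
    """Target gain G needed so the driver draws at most max_recirc of gross electric.
    Gross electric per shot = G * E_driver * eta_thermal,
    while the driver consumes E_driver / eta_driver."""
    return 1.0 / (eta_driver * eta_thermal * max_recirc)

print(f"NIF-like 1% driver:   G >= {required_gain(0.01):.0f}")
print(f"10% efficient driver: G >= {required_gain(0.10):.0f}")
```

A 1% driver needs a target gain of around 1000, while a 10% driver gets by with around 100; for comparison, NIF's ignition shots reached target gains of order unity to a few.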

The alternative concepts are now being built at scales larger than the previous government-funded programs reached, in order to resolve the unexplored physics issues. They will show in the next few years whether they can hit their targets. Most of them are targeting commercial demonstration plants earlier than the current plan for ITER to use D-T plasmas. Even if they don't, the attempt will have been made at a small fraction of the cost of ITER. The funders believe it is a risk worth taking given the potential benefit and profits to be made.
 
Here's some basic background from the fusion power Wikipedia page.

Here's a list of some of the top commercial fusion companies in terms of funding that I've seen in the news. It is not fully inclusive but I've tried to include a broad sample of the different concepts. There are lots of commercial companies. Feel free to add any others you are interested in discussing:

The Commonwealth Fusion Systems (CFS) concept is a compact high magnetic field D-T tokamak that uses REBCO superconducting magnets. The magnets have non-superconducting joints so they can be taken apart to more easily replace the first wall, which they consider a consumable rather than a life-of-plant component.

The TAE Technologies concept is a steady state FRC with current drive sustained by neutral beam injection. They are targeting the p-B11 reaction which is the cleanest cycle in terms of radioactivity and fuel supply.

The Helion concept is a pulsed FRC that uses D-He3 and has a direct power conversion cycle that operates in a manner similar to an internal combustion engine using a magnetic piston. It also needs D-D reactions to make the He3 fuel. It uses magnetic adiabatic compression to heat the plasma to high temperatures.

The Zap Energy concept is a pulsed Z-pinch that uses D-T fuel. Z-pinches do not need magnetic field coils.

The General Fusion concept is a pulsed D-T spherical tokamak with a liquid metal wall. It uses movement of a liquid metal wall for compression heating of the plasma.

The Type One Energy concept is a steady state D-T stellarator that uses high temperature superconducting magnets.

The Realta Fusion concept is a steady state D-T tandem mirror that is using REBCO superconducting magnets built for them by CFS.

The First Light Fusion concept is D-T inertial confinement fusion driven by electromagnetically launched solid projectiles.

The Focused Energy concept is D-T inertial confinement fusion driven by lasers and a "fast ignition" proton beam.
 
Last edited:
Let's move on to Helion. I want to start with an old Helion presentation on work they did under ARPA-E funding. Slide 5 illustrates the power density/cost argument that paulfdietz has been trying to make. Helion has explored using D-T fuel in the past and has decided to use D-He3 instead. Helion should be close to starting up their new machine, based on things they have put out in public. In future posts, I'll give my responses/opinions on things said in the other thread that I think are based on misconceptions and on biases from other concepts being applied to Helion's concept. Maybe later today or tomorrow.
 

Megalodon

Ars Legatus Legionis
35,084
Subscriptor++
The First Light Fusion concept is D-T inertial confinement fusion driven by electromagnetically launched solid projectiles.

The Focused Energy concept is D-T inertial confinement fusion driven by lasers and a "fast ignition" proton beam.

My take is that inertial confinement has the most promise here because the dependent technologies like power electronics and lasers and target design have more headroom for improvement than magnets for magnetic confinement, barring the development of a new miracle superconductor that can support much stronger fields.

The Teller-Ulam design demonstrates you can use a point energy source to do symmetric compression, so fundamentally using clever targets and a simple driver, as First Light proposes to do, seems like a good approach. I wasn't familiar with Focused, but their approach seems like a good way to mitigate Rayleigh–Taylor instabilities (the tendency for the interface between fluids of different densities to become turbulent, which makes a clean implosion more difficult). They do less compression but more heating, similar to the "spark plug" in a Teller-Ulam thermonuclear bomb.

The other thread had some discussion of space propulsion, and I think First Light's approach merits a mention there as well, because if you can get a fusion pulse from a projectile impact on one side of a target, you can probably have that impact happen outside the ship next to a pusher plate, and hence do nuclear pulse propulsion (aka Project Orion). And if you've got a fusion pulse, you can use its x-rays to drive a two-stage device, but crucially without a fission primary. That means arbitrarily low yield (tons of TNT equivalent rather than kilotons), which makes the structural and shock-handling problems easier, no fission products, reduced (although perhaps not eliminated) electromagnetic effects, and perhaps reduced proliferation concerns, because such pulse units would not be actual bombs: there's no way to get a reaction out of them without placing them within a few meters of a huge driver apparatus, which is unsuitable for a weapon. I'm not sure how the neutronics would work out, i.e. whether you'd get enough n + Li -> T + He conversion in such a small device, but it might be possible to keep some of the fuel as lithium deuteride like they do in bombs, which would be convenient because it's a dense solid at room temperature without any radioactive half-life to worry about.
 

demultiplexer

Ars Praefectus
4,009
Subscriptor
Since we can run the gamut a bit more freely, there's a kind of inertial fusion energy that really gets no love, and I'm a little bit amazed that it's not being pursued, since it requires no new technology to work and a lot of the basic legwork has been done and shown to be viable.

Famously, project PACER tried to do this and in fact succeeded in demonstrating the scaling and cost opportunities, both of which are orders of magnitude better than anything in fission or fusion that has followed. The downside, of course, is that you're making a big stockpile of bombs.

Bombs that can be pretty small and, funnily, require thinner containment vessels and less advanced containment technology than even continuous fission reactors. Where currently fusion targets run from ~$1-10k for a tiny amount of fuel, an entire 10 kt fusion bomb could (at the time of PACER) be made for about $20k, with a theoretical fission bomb equivalent being in the hundreds of dollars. Even when correcting for purchasing power, the orders of magnitude more energy released more than compensates for the slightly higher cost.
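As a sanity check on those per-shot economics (the ~35% thermal-to-electric efficiency and perfect yield capture below are my own assumptions, not PACER figures):

```python
KT_TNT_J = 4.184e12  # joules per kiloton of TNT equivalent
J_PER_MWH = 3.6e9    # joules per megawatt-hour

yield_kt, device_cost = 10, 20_000  # the PACER-era figures quoted above
eta = 0.35                          # assumed thermal-to-electric efficiency

e_mwh = yield_kt * KT_TNT_J * eta / J_PER_MWH
print(f"{e_mwh:.0f} MWh_e per shot -> ${device_cost / e_mwh:.2f}/MWh_e fuel cost")
```

That comes out around $5/MWh_e in fuel cost per shot, which shows why the concept looked attractive on paper, before asking what the containment vessel and steam plant would cost.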

Modern academic research points to the possibility of making hybrid fission-fusion bombs, or even conventional-explosive-activated partial fusion bombs as small as kilograms of TNT equivalent. They're super inefficient to use, but they show the ability to scale in this space with at least theoretically low-cost input streams. And more importantly: making power using bombs that have no military purpose and that can be thoroughly controlled by IAEA observers. That stands in contrast to the serious proliferation risk of tritium breeding in continuous fusion concepts.

I'll conveniently leave off all the downsides to this method much like UserJoe likes to selectively and naively look at fusion concepts ;-) (meant as a playful dig, I'm not mad).
 

Megalodon

Ars Legatus Legionis
35,084
Subscriptor++
Famously, project PACER tried to do this and in fact succeeded in demonstrating the scaling and cost opportunities, both of which are orders of magnitude better than anything in fission or fusion that has followed. The downside, of course, is that you're making a big stockpile of bombs.

Bombs that can be pretty small and, funnily, require thinner containment vessels and less advanced containment technology than even continuous fission reactors. Where currently fusion targets run from ~$1-10k for a tiny amount of fuel, an entire 10 kt fusion bomb could (at the time of PACER) be made for about $20k, with a theoretical fission bomb equivalent being in the hundreds of dollars. Even when correcting for purchasing power, the orders of magnitude more energy released more than compensates for the slightly higher cost.

Something I thought was interesting that might be relevant: https://www.nature.com/articles/253525a0

If you can achieve ICF-like compression with a fissile core, the critical mass can be less than a gram. Less fission, fewer fission products, and you can do more with fusion if you use it as the primary in a two-stage device. With something like that magnetically accelerated projectile, that kind of compression might be doable, and like fusion targets it's useless as a bomb because it can't detonate without a huge power system.
 

paulfdietz

Ars Scholae Palatinae
749
Since we can run the gamut a bit more freely, there's a kind of inertial fusion energy that really gets no love, and I'm a little bit amazed that it's not being pursued, since it requires no new technology to work and a lot of the basic legwork has been done and shown to be viable.

Famously, project PACER tried to do this and in fact succeeded in demonstrating the scaling and cost opportunities, both of which are orders of magnitude better than anything in fission or fusion that has followed. The downside, of course, is that you're making a big stockpile of bombs.

It is my understanding that PACER did not demonstrate competitive cost. Indeed, I believe cost analysis is why the program was abandoned.
 

paulfdietz

Ars Scholae Palatinae
749
Wasn't sure where to put this, but I randomly ran into a few references that Lead-208 might be a useful moderator. It's a weak moderator compared to light nuclei, but has a very low neutron capture cross section, lower even than deuterium.

Lead is interesting in that it actually is quite good at moderating energetic neutrons. Not by elastic collisions like neutrons on hydrogen, but by inelastic nuclear collisions. That is, the neutron scatters off the lead nucleus but leaves the lead nucleus in an excited state (with the neutron losing the energy that went into that excitation). The nucleus then deexcites either by emission of a gamma ray photon or (if the excitation is energetic enough) by evaporation of another neutron. The latter is not likely for fission-spectrum neutrons, but would be important for 14 MeV neutrons from DT fusion.

Once the neutron's energy is degraded below the threshold for exciting the lead nucleus, lead becomes almost inert as a moderator, although it can still scatter neutrons, so it can act as a reflector. Neutron shielding around accelerators will have a lead layer backed by a moderator like boron-doped plastic, and perhaps a final lead layer to catch gamma rays.
 
  • Like
Reactions: Megalodon
My take is that inertial confinement has the most promise here because the dependent technologies like power electronics and lasers and target design have more headroom for improvement than magnets for magnetic confinement, barring the development of a new miracle superconductor that can support much stronger fields.
It's hard to get high power, high repetition rate, and high reliability, but it is possible to do a lot better than NIF, and progress is being made. There were already lasers with much higher efficiency than the NIF lasers when those lasers were chosen. The Naval Research Laboratory (NRL) was using KrF lasers in their laser fusion tests back in the late 1980s. They were always a proponent of direct drive and always seemed more interested in energy production than DOE (NIF) was. They have also made advances in the uniformity of beam illumination on the target and in repetition rate. The latest and greatest gas lasers seem to be ArF.
 
Last edited:
  • Like
Reactions: continuum
Since we can run the gamut a bit more freely, there's a kind of inertial fusion energy that really gets no love, and I'm a little bit amazed that it's not being pursued, since it requires no new technology to work and a lot of the basic legwork has been done and shown to be viable.

Famously, project PACER tried to do this and in fact succeeded in demonstrating the scaling and cost opportunities, both of which are orders of magnitude better than anything in fission or fusion that has followed. The downside, of course, is that you're making a big stockpile of bombs.

Bombs that can be pretty small and, funnily, require thinner containment vessels and less advanced containment technology than even continuous fission reactors. Where currently fusion targets run from ~$1-10k for a tiny amount of fuel, an entire 10 kt fusion bomb could (at the time of PACER) be made for about $20k, with a theoretical fission bomb equivalent being in the hundreds of dollars. Even when correcting for purchasing power, the orders of magnitude more energy released more than compensates for the slightly higher cost.

Modern academic research points to the possibility of making hybrid fission-fusion bombs, or even conventional-explosive-activated partial fusion bombs as small as kilograms of TNT equivalent. They're super inefficient to use, but they show the ability to scale in this space with at least theoretically low-cost input streams. And more importantly: making power using bombs that have no military purpose and that can be thoroughly controlled by IAEA observers. That stands in contrast to the serious proliferation risk of tritium breeding in continuous fusion concepts.

I'll conveniently leave off all the downsides to this method much like UserJoe likes to selectively and naively look at fusion concepts ;-) (meant as a playful dig, I'm not mad).
I talked to a guy who had an updated and improved version of PACER back in the late 80s, in a poster session at a fusion meeting. It's even crazier today than it was back then. Do you really think anyone would approve of HEU or plutonium bombs out in circulation? The guy was a serious fusion scientist and has his name on a lot of the mirror fusion and direct energy conversion work that came out of LLNL. I think the PACER thing was a hobby project for him.
 

demultiplexer

Ars Praefectus
4,009
Subscriptor
I talked to a guy who had an updated and improved version of PACER back in the late 80s, in a poster session at a fusion meeting. It's even crazier today than it was back then. Do you really think anyone would approve of HEU or plutonium bombs out in circulation? The guy was a serious fusion scientist and has his name on a lot of the mirror fusion and direct energy conversion work that came out of LLNL. I think the PACER thing was a hobby project for him.
Nah, it's never going to be a real thing politically because bombs, but from both a cost and a proliferation perspective it's arguably the best possible nuclear technology. It's one of those things that seems crazy at first and second thought, but once you dig into it, it's surprisingly great. Compared to keeping a year of fuel on hand, manufacturing only a week or two of fuel on-site means at least an order of magnitude less material, and less spicy stuff to begin with. And making bombs is considerably cheaper than making fuel rods and maintaining reprocessing facilities, not to mention a hot, moderated primary circuit.

In an alternative universe, this may have been our primary nuclear energy source. And space propulsion.