Am I correct in thinking that the process network has to be a DAG: any loops would make it impossible to draw a cut? In turn, this would have an impact on process analysis for situations like the NHS, where:
a/ the process for a presenting individual cannot be defined a priori
b/ some node types (e.g. imaging, lab analysis) could appear many times on any given route
c/ 'capacity' could have a large variation, depending on individuals' situations.
I think it's still a useful framework for such situations, but the limitations and changes to approach need to be understood.
It's a good question! I basically did the simple version, where everything is going "forward" in some sense (I was already up to 2000+ words as it was!).
In the more general case you can have edges coming back across the cut, and it's a matter of accounting as to how you treat them - basically, as you might expect, at a minimum cut these backwards edges carry no flow, and they don't contribute to the value of the cut. But it just gets a bit fussy to deal with.
(And as you say on LinkedIn as well, people are much more variable than widgets, so of course it's a bit of a toy model in this case!)
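To pin the accounting down in symbols (S and T here are just my generic names for the source side and sink side of a cut, not notation from the post):

```latex
% The capacity of a cut counts only the "forward" edges, from S to T:
c(S,T) \;=\; \sum_{u \in S,\ v \in T} c(u,v)
% while the value of a flow f is the *net* flow across any cut:
|f| \;=\; \sum_{u \in S,\ v \in T} f(u,v) \;-\; \sum_{u \in T,\ v \in S} f(u,v)
% Since |f| \le c(S,T) for every cut, a flow and a cut achieving equality
% must saturate every forward edge and leave every backward edge empty.
```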
I don't think that people's variability is necessarily an issue - it should be possible to separate process variability from input variability, if and only if the processes are well documented and their use is verified. However, current NHS work seems to ignore this step, so, I think, the only conclusion that most analyses can come up with is "we need better data for the processes".
It's interesting looking at the ScHARR/Warwick review of BPR at Leicester mentioned in one of the other comments. It dates from a project that started in 1992 (i.e. before BPR was well established). That's a long time ago, and one might have expected a better understanding of how to improve performance by now.
Regarding your bit on the NHS, it's worth reading the evaluation of a programme that sought to do just what you described: taking a process and breaking it down into its components to reduce steps, improve efficiency, etc. There's an online copy at
https://www.elft.nhs.uk/sites/default/files/import-news/Re-engineering_Leicester_Royal_Infirmary.pdf
Cool, thanks! Of course I realise that a lot of this stuff is going on in practice, and I hope I'm not reinventing the wheel, but I think it's interesting to see that all these different things live in this common framework.
Regarding the discussions about the National Grid and the problems of distributing "Green Energy" sources: I share your skepticism about whether the targets for a Green transition can easily be met. However, one thing I've been pondering is what the impact will be of a large percentage of electric cars being capable of Vehicle to Home (V2H) or Vehicle to Grid (V2G) reverse delivery of electric charge. In such a scenario, provided it could be easily controlled by the network via appropriate software, the electric cars, which are a substantial part of the problem, also become distributed storage capacity over the whole grid network. I confess it's beyond my understanding of the technology, and of modelling the consequences, to know whether this feasibly reduces some of the problems with grid capacity and design.
I think it might help a bit in this sense, particularly if some of it is coming from e.g. solar chargers on roofs, because it adds a degree of flexibility to the system - but I'm not sure how much modelling there is of that.
Maybe. At the expense of your car not having sufficient charge when you need it sometimes. Or the additional charge/discharge cycles reducing effective battery life.
Yes, that's an additional set of factors that need to be incorporated within the modelling and obviously considered in the software control. However, my understanding is that the lifetime of the batteries is improved if you usually cycle between 80% and 20% of full capacity and minimise the use of rapid charging. Many users may not be using the full capacity of their vehicle's battery on a daily basis, so it doesn't need to be fully charged each day. It ought to be possible to build into an app the ability to enter expected vehicle usage and allow it to offer the spare capacity for negotiation with your chosen home energy provider. This is already possible for people who have solar panels and home battery storage fitted, where you can take advantage of low-tariff periods to charge the home battery and then use the stored charge to power your home, or sell it back to the grid when the tariffs are higher. All that's needed is to be able to include the EV's battery storage in this system.
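As a toy illustration of the sort of arithmetic such an app would be doing - the 20%-80% window is the one mentioned above, and the function name and figures are made up for illustration, not any real provider's API:

```python
def v2g_offer_kwh(battery_kwh, state_of_charge, expected_trip_kwh,
                  floor=0.20, ceiling=0.80):
    """Rough sketch: how much energy a car could offer back to the grid.

    Keeps the battery between `floor` and `ceiling` of full capacity
    (often cited as battery-friendly) and reserves enough charge for the
    driver's expected usage. All names and numbers are illustrative.
    """
    usable_now = state_of_charge * battery_kwh
    reserve = floor * battery_kwh + expected_trip_kwh  # never dip below this
    return max(0.0, min(usable_now, ceiling * battery_kwh) - reserve)

# A 60 kWh car at 75% charge, expecting a 10 kWh commute tomorrow, could
# offer about 60*0.75 - (60*0.20 + 10) = 23 kWh to the grid overnight.
print(v2g_offer_kwh(60, 0.75, 10))  # -> 23.0
```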
In other words, at a great deal more complexity, expense and inconvenience over conventional power generation/distribution. Sounds like substantial living standards regression to me.
Applications can learn from usage history and schedule charging accordingly, so you only need to enter unusual usage.
Is this just another way of describing the process of identifying bottlenecks?
Yes, but too many models of bottlenecks tend to use examples of linear production lines where the bottleneck is trivially obvious. Spotting the bottleneck in a network (graph) is a lot harder.
You basically need graph maths, like how social network analysis uses graph theory to calculate how information (and power) flows through groups of people.
Most of it starts with calculating every possible pathway through the network, and then looking at min/max/sum/weighted average etc.
The "cut theory" is basically saying that max value = sum of the capacity/strength of the first degree edges. Or at least that's the way I interpreted it.
The other thing that I didn't really get into is that the max-flow min-cut result comes with a constructive way of achieving the maximal flow in "most" cases - the Ford-Fulkerson algorithm itself, which works by pumping more flow in along paths that aren't at capacity until you can't do any more: https://en.wikipedia.org/wiki/Ford%E2%80%93Fulkerson_algorithm
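For anyone who wants to experiment, here's a minimal Python sketch of the BFS variant (Edmonds-Karp). The toy network at the bottom is made up for illustration and isn't the diagram from the post:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp (the BFS flavour of Ford-Fulkerson).

    capacity: dict of dicts, capacity[u][v] = capacity of edge u -> v.
    Returns the value of the maximum flow from source to sink.
    """
    # Residual capacities: copy the forward edges, add reverse edges at 0.
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)

    total = 0
    while True:
        # Breadth-first search for a path with spare residual capacity.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, spare in residual[u].items():
                if spare > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return total  # no augmenting path left, so the flow is maximal

        # Walk back from the sink to recover the path and its bottleneck.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(residual[u][v] for u, v in path)

        # Push the bottleneck amount, crediting reverse edges so a later
        # path can "undo" flow if that frees up a better route.
        for u, v in path:
            residual[u][v] -= push
            residual[v][u] += push
        total += push

# Made-up toy network: max flow is 6, matched by the cut
# {a->port, b->port} with capacity 1 + 5.
caps = {
    "factory": {"a": 6, "b": 4},
    "a": {"port": 1, "b": 3},
    "b": {"port": 5},
}
print(max_flow(caps, "factory", "port"))  # -> 6
```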
In manufacturing systems, the book "The Goal" by Eliyahu M. Goldratt clearly expressed this idea (although it is arguable that the approach was already known and implemented). His Theory of Constraints, though, really became a hook on which to hang consultancy services.
Interesting, thanks - I hadn't heard of that book, it definitely looks like it's worth me taking a look at it!
The Goal is well worth reading. I thought of it immediately when I started reading this post.
You beat me to it... This is essentially Goldratt's message from The Goal and Critical Chain, expressed as a graph.
An alternative way of looking at it - again from manufacturing - is that you can never regain time lost at a bottleneck.
Apologies for belated comment. This is a very nice illustration of von Neumann minimax and Kuhn-Tucker duality (incl complementary slackness) in a linear programming problem in which both primal and dual solutions are easily seen. (Think of the routes as all being through bandit country. All the value-added of getting units from factory to port can be extracted by bandits on the edges crossed by the cut; no ransoms can be extracted in other edges.)
Yes, though I think that's probably a notch or three of difficulty above what I can get away with at this venue! (I also think of it as Blockbusters - either there's a "horizontal" path from source to sink along which flow can be improved, or there's a "vertical" route formed by saturated edges)
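(For anyone who does want the linear-programming pairing spelled out, it looks roughly like this - my own notation, not anything from the post:)

```latex
% Primal (max flow): choose edge flows f_{uv} to
%   maximise   \sum_v f_{sv}
%   subject to 0 \le f_{uv} \le c_{uv}                          (capacities)
%              \sum_u f_{uv} = \sum_w f_{vw} \ \forall v \ne s,t (conservation)
% Dual (min cut): choose node potentials p_v and edge lengths y_{uv} \ge 0 to
%   minimise   \sum_{(u,v)} c_{uv}\, y_{uv}
%   subject to y_{uv} \ge p_u - p_v, \qquad p_s = 1,\; p_t = 0.
% An integral optimum has p_v \in \{0,1\}: the cut sits where the potential
% drops, and complementary slackness says the crossing edges are saturated
% (the "bandits" collect c_{uv} there) while slack edges yield no ransom.
```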
Which illustrates Farkas’ lemma?
Nicely put…it’s fascinating to read the outcomes of someone’s thinking, and the result of how they have spent their time…
Thanks for this. I never really thought of this subject in this way, or realised that it got mathematical treatment only as recently as the 1950s. The words I would use for all this would be a combination of process optimization, operations research, and critical path determination using PERT and Gantt charts.
Sure, and those are all good words! But certainly where I'm sitting operations research (including this kind of stuff, as well as game theory and optimization in general) is a branch of maths, even if its insights end up getting applied in fields like engineering and management.
Very nice. Peter Senge has a nice discussion in The Fifth Discipline, though not the maths.
I really enjoyed reading this and suspect I will need to read it a few more times to fully get it.
I was confused that the line on the left coming from the factory, with a capacity of 6, was not counted towards the value of the cut. So am I right in saying that the cut has two characteristics: (1) it indicates the cut value, and (2) it defines the set of things required to achieve the cut value, even if they don't directly factor into that value?
The value of the cut is just formed by the edges crossing it - stuff going on to one side or another isn't really relevant. You can see this by looking at the edge you mention: sure, you can get a flow of 6 down it, but only 4 of those (1+3) have anywhere to go afterwards, so you can't run traffic on that link at its full capacity.
Thanks for responding! Right, but I suppose my point was: to achieve that flow of 6, you still have some reliance on using that line, even at reduced capacity. Or is that not relevant?
Tom Körner has a discussion of Ford-Fulkerson (and Braess's paradox) in Chapter 11 of his most excellent book "The Pleasures of Counting". In the preface he claims that the book is "for able school children of 14 and over", which makes me think that he must meet a different selection of 14 year olds than the ones I come across in New Zealand.
Ha ha! He is very good (he lectured me as a student) but there is always a bit of a question about calibration of levels with some of this stuff!
I found it confusing how you chose the cut line, because I was reading quickly and the line from the factory had a capacity of 6, which drew my attention away from the other lines whose total capacity was also 6. It might have been clearer if that first line had had a capacity of 5 or 7.
Regardless, very cool concept and very good explanation.
The computer game Factorio has a lot of this type of problem solving. Typically, there are a large number of resource chains that need to be produced, and oversupply in one area does not help until you fix the bottleneck in the chain. There is an optimal strategy for the ratios of what to build, with an overall constraint of your computer's CPU power.
Also, you can apply this process to emergency rooms, to see what you need to do to free up a number of beds in the shortest amount of time and so increase processing capacity. In the old days, the ER Nurse Manager on duty would look at the board of patients on the way with the ER Doctor and say, "What do we need to do to get these people out of here?" This would involve a discussion of what was needed for the next step: this person is going to be admitted, this person is waiting on labs, the doctor needs to sign a discharge order, and so on. Most hospitals in the US now have discharge planning teams for inpatients, but normally they don't have that team in the ER, so it falls on the nurse manager, who is overworked. I'm not sure a lot of the advancements in the US healthcare system have made it to the UK, due to it being a government service that does not have to be profitable.
I’d relocate the factory to the Port site!
One thing I would add at the start is that it's 1 widget per train... Very interesting article.
nicely done!