Truncate #11
base: main
Conversation
- Tests only ensure that the output is a valid tensor train; they do not check for numerical accuracy.
- I also changed the `_fast_from` method to work with tensor trains. In particular, the change is made so that `copy` works with a tensor train.
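For reference, here is a minimal numpy sketch of the kind of structural check the current tests perform. The `(left, phys, right)` core layout and the helper name are my assumptions, not the repository's API:

```python
import numpy as np

def assert_valid_tensor_train(cores):
    """Check that a list of order-3 cores (left_bond, phys, right_bond)
    chains together into a valid tensor train."""
    # Boundary bonds are conventionally of dimension 1.
    assert cores[0].shape[0] == 1
    assert cores[-1].shape[-1] == 1
    # The right bond of each core must match the left bond of the next.
    for left, right in zip(cores[:-1], cores[1:]):
        assert left.shape[-1] == right.shape[0]

# Example: a random bond-dimension-2 train on three sites.
cores = [np.random.rand(1, 4, 2), np.random.rand(2, 4, 2), np.random.rand(2, 4, 1)]
assert_valid_tensor_train(cores)
```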
Pull Request Test Coverage Report for Build 1719059410
💛 - Coveralls
The implementation is really readable -- thank you for the great ASCII pictures in the code. Those were very helpful.
Maybe we could check the mathematical correctness by contracting the whole network at the end and comparing the large singular values of the original and truncated networks?
Regarding the direction of truncation: To me it seems more "natural" to contract the node with the singular values to the left when proceeding to the right as we currently do. That leaves the singular values more distributed within the train, rather than all in the rightmost node as currently happens. I understand that doesn't leave the network in a canonical form though. I leave the choice of whether to keep this as is for now up to you. It sounded from our conversation this morning that the current implementation is the right one for the examples you've tried so far.
In the long run I guess it would make sense to support both moving left or right through the train and contracting the singular values either left or right, so that users could pick whichever option made the most sense to them (but I definitely don't know how people use these truncations in practice yet, so please take this idea with a pinch of salt).
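A possible sketch of the contraction-and-compare test suggested above, written against plain numpy rather than the repository's classes (the `(left, phys, right)` core layout is assumed): contract both the original and the truncated train to dense tensors and compare the leading singular values of a matricization.

```python
import numpy as np

def contract_train(cores):
    """Contract a list of (left, phys, right) cores into a dense tensor."""
    result = cores[0]
    for core in cores[1:]:
        result = np.tensordot(result, core, axes=([-1], [0]))
    # Drop the trivial boundary bonds of dimension 1.
    return result.reshape(result.shape[1:-1])

def leading_singular_values(tensor, k):
    """Singular values of the first-site matricization, largest first."""
    matrix = tensor.reshape(tensor.shape[0], -1)
    return np.linalg.svd(matrix, compute_uv=False)[:k]

# In a test one could then assert, for some k below the truncated bond dimension:
# np.testing.assert_allclose(
#     leading_singular_values(contract_train(original_cores), k),
#     leading_singular_values(contract_train(truncated_cores), k),
#     rtol=1e-10,
# )
```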
Co-authored-by: Simon Cross <[email protected]>
Since we do not ensure that nodes in edges are sorted, we cannot rely on `node1` and `node2` to get the nodes.
…stead of right node.
It now returns a list of lists with the truncated singular values for each node.
`truncate` now returns a list of lists with the truncated singular values.
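On the caller's side, a hedged sketch of how such a list of lists could be turned into the usual TT-SVD style bound on the 2-norm truncation error. The `tt.truncate` call shown is hypothetical, and this is only an upper bound rather than the exact norm of the difference:

```python
import numpy as np

def discarded_weight(truncated_singular_values):
    """Upper bound on the 2-norm truncation error from the discarded
    singular values returned per bond (a list of lists)."""
    squares = sum(
        float(np.sum(np.asarray(values) ** 2))
        for values in truncated_singular_values
    )
    return np.sqrt(squares)

# e.g. discarded = tt.truncate(max_bond_dim=8)   # hypothetical call
#      error_bound = discarded_weight(discarded)
```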
…twork into truncate
Previously, random nodes contained only real values. Now they contain scalar values.
I ended up changing the algorithm to contract the singular values to the left instead of to the right. I think it is now easier to create a test for numerical accuracy, so I will do so tomorrow. After that it should be ready to be merged.
This should not affect the values of the singular values nor their truncation, but it does make it easier to test the function.
Added one more test that checks for the numerical correctness of the algorithm. It does so by using an MPO whose nodes are diagonal, which makes it easy to infer the resulting tensor train from the truncation. Let me know if you are happy with this @hodgestar and I will merge it.
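To illustrate why diagonal nodes make the expected truncation easy to predict, here is a minimal numpy example that does not use the PR's classes; it also uses a single physical index per core (MPS-style) for brevity. With cores that are diagonal in every index, the contracted tensor is supported only on the entries with i1 = i2 = i3, and the singular values of any unfolding are just the products of the per-core diagonal weights, so the result of a truncation can be written down by hand.

```python
import numpy as np

r = 3                                  # bond and physical dimension
weights = np.array([3.0, 1.0, 0.1])    # one weight per diagonal "channel"

def diag_core(left, right):
    """Core that is diagonal in every index it carries."""
    core = np.zeros((left, r, right))
    for a in range(r):
        core[min(a, left - 1), a, min(a, right - 1)] = weights[a]
    return core

cores = [diag_core(1, r), diag_core(r, r), diag_core(r, 1)]

# Contract the train: the dense tensor is nonzero only on i1 == i2 == i3,
# with value weights[a] ** 3 on that diagonal.
dense = cores[0]
for core in cores[1:]:
    dense = np.tensordot(dense, core, axes=([-1], [0]))
dense = dense.reshape(r, r, r)

# The singular values of any unfolding are the diagonal products, so the
# effect of truncating a bond is known in advance.
svals = np.linalg.svd(dense.reshape(r, -1), compute_uv=False)
print(svals)         # ~ [27.0, 1.0, 0.001]
print(weights ** 3)  # the same values
```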
This PR includes a new method `truncate` together with some small fixes for the tensor train class.
A few notes on this method. Tests currently ensure that the output makes sense as a tensor train. They also ensure that the bond dimension of the output has been truncated. I have not been able to design a test that numerically checks the accuracy of the truncation step though. The issue here is that I could not think of an easy analytical example to compare to. Also, the "error" returned does not accurately reproduce (and it should not) the expected error obtained from the norm of `truncated_state - state`, which makes it difficult to relate those two values in a test.
The tensor train is left in a right-normalized state. This means that the nodes are now unitary matrices (when shaped correctly). However, I am worried that a truncation with only a right normalization may not be enough, in particular for the case of using `max_trunc_error`. The point is that after truncating for the first time, all the nodes are unitary and hence have a degenerate set of singular values equal to one. I am wondering how another truncation will affect the normalized state. If you have to get rid of a few singular values but all are the same... how do you know which one is best to get rid of? I guess this should never happen in practice, as you should only truncate the tensor train once the bond dimension has increased, and hence the singular values will probably not be degenerate anymore. I guess this will come out again once matmul is included. For the moment let's just merge it as it is and we will improve it later if needed.
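To make the description above concrete, here is a hedged numpy sketch of a right-to-left truncation sweep of the kind described: at each bond an SVD is taken, the right factor is kept as a right-orthogonal core, the singular values (together with the left factor) are contracted into the left neighbour, and the discarded singular values are collected per bond. The function name, signature and `(left, phys, right)` core layout are my assumptions, not the PR's actual `truncate` API.

```python
import numpy as np

def truncate_sweep(cores, max_bond_dim=None, max_trunc_error=None):
    """Right-to-left SVD sweep over (left, phys, right) cores.

    Every core except the first is left right-orthogonal; the singular
    values are absorbed into the left neighbour.  Returns the new cores
    and the discarded singular values per bond, leftmost bond first.
    """
    cores = [core.copy() for core in cores]
    discarded = []
    for k in range(len(cores) - 1, 0, -1):
        left_dim, phys_dim, right_dim = cores[k].shape
        u, s, vh = np.linalg.svd(
            cores[k].reshape(left_dim, phys_dim * right_dim),
            full_matrices=False,
        )

        keep = len(s)
        if max_trunc_error is not None:
            # Weight of the tail s[i:] for every i; keep the smallest head
            # whose discarded tail stays within the requested error.
            tail = np.sqrt(np.cumsum(s[::-1] ** 2))[::-1]
            keep = max(1, int(np.count_nonzero(tail > max_trunc_error)))
        if max_bond_dim is not None:
            keep = min(keep, max_bond_dim)

        discarded.append(s[keep:])
        # The right factor becomes the new, right-orthogonal core ...
        cores[k] = vh[:keep, :].reshape(keep, phys_dim, right_dim)
        # ... and U * S is contracted into the left neighbour.
        cores[k - 1] = np.tensordot(
            cores[k - 1], u[:, :keep] * s[:keep], axes=([-1], [0])
        )
    return cores, discarded[::-1]
```

In this sketch the concern about degeneracy is visible directly: each right-orthogonal core, reshaped back into a matrix, has orthonormal rows, so a second SVD of that core on its own sees only singular values equal to one.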
Changelog: