The unpacking behavior is only relevant for the standalone jar distribution of RapidWright. The standalone jar bundles the essential data files so that, for the short list of device models it ships with, it never has to reach out over the Internet to download anything (hence, it can "stand alone"). When RapidWright runs from the standalone jar and detects that it has already expanded the bundled files, it won't unpack them again. However, if the user requests additional device files, it will download the missing file from the Internet and place it in the corresponding directory.
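The lookup order described above (use the already-unpacked local copy if present, otherwise download the missing file into the same directory) can be sketched roughly as below. This is not RapidWright's actual code; the `Fetcher` interface and method names are hypothetical stand-ins for the real download logic:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class DeviceFileLookup {
    /** Hypothetical stand-in for the network download step. */
    interface Fetcher {
        byte[] fetch(String fileName) throws IOException;
    }

    /**
     * Returns the local path of a device file: if the standalone jar already
     * unpacked it (or an earlier run downloaded it), reuse it; otherwise
     * fetch the missing file and store it alongside the unpacked ones.
     */
    static Path getDeviceFile(Path storageDir, String fileName, Fetcher fetcher)
            throws IOException {
        Path local = storageDir.resolve(fileName);
        if (Files.exists(local)) {
            return local; // hit: no network access needed
        }
        Files.createDirectories(storageDir);
        Files.write(local, fetcher.fetch(fileName)); // miss: download and cache
        return local;
    }
}
```

The key property is that a second request for the same file never touches the network, which is exactly what makes the v1-jar-then-v2-jar scenario below problematic: a stale local copy also counts as a "hit".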
Since the standalone jar contains the compiled *.class files, there is no way to pull the latest changes from GitHub, so the release is essentially locked to that set of changes.
But what if a user uses standalone_v1.jar and then later uses standalone_v2.jar -- any new or updated files in v2 would not be unpacked, and the user would have to blow away the getExecJarStoragePath() path to get all those new goodies?
Yes, that is correct. getExecJarStoragePath() used to be a sibling directory of the jar, so it was more or less obvious to delete. But now it lives in a central location under the user's home directory, so it is not as obvious. Perhaps we could add a special "unpacked" flag file, unique to each standalone jar release, that could be checked quickly to decide whether the unpack method should run.
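A minimal sketch of what such a per-release marker could look like. This is not RapidWright's implementation; the flag-file naming scheme and helper methods here are hypothetical, but they show how a version-specific marker would let a v2 jar unpack even when v1's files are already present:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class UnpackFlagSketch {
    // Hypothetical: embed the release version in the flag-file name,
    // e.g. ".unpacked_v2", so each standalone jar release gets its own marker.
    static Path flagFileFor(Path storageDir, String releaseVersion) {
        return storageDir.resolve(".unpacked_" + releaseVersion);
    }

    /** Returns true if this release's bundled files still need unpacking. */
    static boolean needsUnpack(Path storageDir, String releaseVersion) {
        return !Files.exists(flagFileFor(storageDir, releaseVersion));
    }

    /** Call after a successful unpack to record that this release's files are present. */
    static void markUnpacked(Path storageDir, String releaseVersion) throws IOException {
        Files.createDirectories(storageDir);
        Files.createFile(flagFileFor(storageDir, releaseVersion));
    }
}
```

Because the marker is keyed by release rather than by mere directory existence, running a newer jar against the same storage path would still trigger an unpack and refresh the bundled files.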
RapidWright/src/com/xilinx/rapidwright/util/FileTools.java
Lines 1680 to 1683 in 6cf64a8
Wouldn't this mean that new files would not get unpacked, and old files would not get updated?