You would think your reasoning is correct, but it actually isn't.
I have worked on many, many sites, and URLs that had no physical links pointing to them (or none we were aware of) have always managed to get indexed by Google.
Who knows where Google finds the links, but invariably it does. So this is something you should definitely fix.
If you can, 301 redirect the duplicate pages to a single URL; that is the best fix (a redirect sketch follows the canonical example below). If you need the duplicate URLs to stay live for whatever reason, set a canonical tag on each duplicate URL referencing the single preferred URL:
<link rel="canonical" href="http://example.com/special/path/post-name" />
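For the 301 redirect option, here is a minimal sketch of an .htaccess rule, assuming an Apache server with mod_rewrite enabled and that the duplicates live under /category/ while the preferred URLs live under /special/path/ (both paths are illustrative; adjust them to your setup):

# Permanently (301) redirect /category/post-name to /special/path/post-name
RewriteEngine On
RewriteRule ^category/(.+)$ /special/path/$1 [R=301,L]

On nginx or another server, the equivalent is a permanent rewrite/return rule in the site configuration.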
If for some reason you cannot set a canonical tag, you can noindex the duplicates with a robots meta tag.
In the <head> section of the page:
<meta name="robots" content="noindex, follow">
Or in the HTTP response headers:
HTTP/1.1 200 OK
Date: Tue, 25 May 2010 21:42:43 GMT
(…)
X-Robots-Tag: noindex
(…)
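If you serve the pages through Apache, a minimal sketch for adding that header from the main server (or virtual host) configuration, assuming mod_headers is enabled and that the duplicates live under the hypothetical /category/ path:

# Send X-Robots-Tag: noindex, follow for everything under /category/
<LocationMatch "^/category/">
    Header set X-Robots-Tag "noindex, follow"
</LocationMatch>

(LocationMatch is not allowed in .htaccess, so this has to go in the server configuration itself.)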
And as a very last resort, if you cannot implement any of the above, you can block them in your robots.txt file with something like:
User-agent: *
Disallow: /category/
Keep in mind, though, that robots.txt only blocks crawling: Google can still index a blocked URL if it finds links to it elsewhere, which is why this is the last resort.