
Registered since September 28th, 2017
Has a total of 4246 bookmarks.
Tag selected: occur.
Looking up occur tag. Showing 4 results.
Saved by uncleflo on December 23rd, 2018.
A central question in text mining and natural language processing is how to quantify what a document is about. Can we do this by looking at the words that make up the document? One measure of how important a word may be is its term frequency (tf), how frequently a word occurs in a document. There are words in a document, however, that occur many times but may not be important; in English, these are probably words like “the”, “is”, “of”, and so forth. We might take the approach of adding words like these to a list of stop words and removing them before analysis, but it is possible that some of these words might be more important in some documents than others. A list of stop words is not a sophisticated approach to adjusting term frequency for commonly used words. Another approach is to look at a term’s inverse document frequency (idf), which decreases the weight for commonly used words and increases the weight for words that are not used very much in a collection of documents. This can be combined with term frequency to calculate a term’s tf-idf, the frequency of a term adjusted for how rarely it is used. It is intended to measure how important a word is to a document in a collection (or corpus) of documents. It is a rule-of-thumb or heuristic quantity; while it has proved useful in text mining, search engines, etc., its theoretical foundations are considered less than firm by information theory experts.
quantify tidy calculate corpus document words frequency calculating verbs numerical examine occur text weight quantity approach mining collection keyword tag analyse development howto data principle useful technical analysis developer code explanation article
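
The bookmark above describes tf-idf in words only; as a rough sketch, here is one way the weighting could be computed in Python. The helper name tf_idf and the toy corpus are illustrative assumptions, not taken from the bookmarked article.

import math
from collections import Counter

def tf_idf(docs):
    # docs: list of documents, each given as a list of word tokens.
    # Returns one dict per document mapping term -> tf-idf weight.
    n_docs = len(docs)
    df = Counter()                      # document frequency: how many documents a term occurs in
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        counts = Counter(doc)
        total = len(doc)
        doc_weights = {}
        for term, count in counts.items():
            tf = count / total                    # term frequency within this document
            idf = math.log(n_docs / df[term])     # rare terms across the collection get more weight
            doc_weights[term] = tf * idf
        weights.append(doc_weights)
    return weights

# Toy corpus (hypothetical): a word like "the" occurs in every document,
# so its idf is log(1) = 0 and it drops out, much like an implicit stop-word list.
corpus = [
    "the cat sat on the mat".split(),
    "the dog chased the cat".split(),
    "the boat is moored in the harbour".split(),
]
for i, w in enumerate(tf_idf(corpus)):
    print(i, max(w, key=w.get))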
Saved by uncleflo on February 22nd, 2015.
A common way to check whether a point is in a triangle is to find the vectors connecting the point to each of the triangle's three vertices and sum the angles between those vectors. If the sum of the angles is 2*pi, the point is inside the triangle; otherwise it is not. It works, but it is very slow. This text explains a faster and much easier method.
opengl intersection triangle 3d plane ray algorithm normal suggestion research occur surface implementation cpp development howto article reference solution explanation
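
As a minimal sketch of the slow angle-sum test described in the bookmark above (assuming the point already lies in the triangle's plane; the function name and the example coordinates are hypothetical):

import math

def inside_by_angle_sum(p, a, b, c, eps=1e-6):
    # Sum the angles between the vectors from p to each vertex;
    # the point is inside exactly when the angles add up to 2*pi.
    def sub(u, v):
        return tuple(ui - vi for ui, vi in zip(u, v))
    def angle(u, v):
        dot = sum(ui * vi for ui, vi in zip(u, v))
        norm = math.sqrt(sum(ui * ui for ui in u)) * math.sqrt(sum(vi * vi for vi in v))
        return math.acos(max(-1.0, min(1.0, dot / norm)))   # clamp against rounding error
    pa, pb, pc = sub(a, p), sub(b, p), sub(c, p)
    total = angle(pa, pb) + angle(pb, pc) + angle(pc, pa)
    return abs(total - 2.0 * math.pi) < eps

tri = ((0.0, 0.0, 0.0), (4.0, 0.0, 0.0), (0.0, 4.0, 0.0))   # triangle in the z = 0 plane
print(inside_by_angle_sum((1.0, 1.0, 0.0), *tri))   # True: angles sum to 2*pi
print(inside_by_angle_sum((5.0, 5.0, 0.0), *tri))   # False: sum falls short of 2*pi

The three acos calls per test are part of what makes this approach slow compared with the faster method the bookmarked text goes on to describe.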
Saved by uncleflo on February 22nd, 2015.
There are various ray-triangle tests around. I'll describe a few of the most interesting ones, because I don't think there is a "definitive" ray-triangle intersection test; much depends on the type of application you're going to develop. Some methods are faster when most of the tests are positive (hits), others when most are not (early rejection). Some use lots of memory to precalculate as much as possible, making them very fast for scenes without cache-hit problems (for example, scenes with relatively few, large triangles and spatial subdivision), while others use less memory and no precalculation. The precalculation option is often important, so I'll discuss it in more detail here. Two kinds of precalculation can be useful: per-frame precalculation (data computed once for the entire scene, for every triangle, which only needs to be updated when animated triangles change position in the next frame), and per-ray-bundle precalculation.
opengl intersection triangle 3d plane ray algorithm normal suggestion research occur surface implementation cpp development howto article reference solution explanation
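
To make the per-frame precalculation idea above concrete, here is a small Python sketch that caches per-triangle data (edge vectors and the plane equation) once per frame so every ray test can reuse it. The structure and field names are illustrative assumptions, not taken from the bookmarked article.

def sub(u, v): return (u[0] - v[0], u[1] - v[1], u[2] - v[2])
def cross(u, v): return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])
def dot(u, v): return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]

def precompute_triangle(v0, v1, v2):
    # Per-frame cache: only needs rebuilding when the triangle's vertices move.
    edge1, edge2 = sub(v1, v0), sub(v2, v0)
    normal = cross(edge1, edge2)            # unnormalised plane normal
    return {"v0": v0, "edge1": edge1, "edge2": edge2,
            "normal": normal, "d": dot(normal, v0)}   # plane equation: dot(normal, x) = d

# Once per frame, for every (possibly animated) triangle in the scene:
scene = [precompute_triangle((0, 0, 0), (1, 0, 0), (0, 1, 0))]
print(scene[0]["normal"])   # (0, 0, 1)

Trading this extra memory per triangle for fewer per-ray operations is exactly the kind of choice the author weighs above; per-ray-bundle precalculation would instead cache quantities shared by a bundle of rays.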
Saved by uncleflo on February 22nd, 2015.
The intersection of the most basic geometric primitives was presented in Algorithm 5 on the Intersection of Lines and Planes. We now extend those algorithms to include 3D triangles, which are common elements of 3D surface and polyhedron models. We only consider transversal intersections, where the two intersecting objects do not lie in the same plane. Ray and triangle intersection is perhaps the most frequent nontrivial operation in computer graphics rendering using ray tracing. Because of its importance, several algorithms have been published for this problem (see: [Badouel, 1990], [Moller & Trumbore, 1997], [O'Rourke, 1998], [Moller & Haines, 1999]). We present an improvement of these algorithms for ray (or segment) and triangle intersection, and we also give algorithms for triangle-plane and triangle-triangle intersection.
opengl intersection triangle 3d plane ray algorithm normal suggestion research occur surface implementation cpp development howto article reference solution explanation
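
The bookmark cites [Moller & Trumbore, 1997] among the published algorithms; as a hedged illustration of that style of test (barycentric coordinates from cross and dot products, with the early rejections mentioned in the previous bookmark), a compact Python version could look as follows. The function name and tolerance are assumptions of this sketch, not the article's code.

def ray_triangle(orig, direction, v0, v1, v2, eps=1e-9):
    # Returns the ray parameter t of a transversal hit with triangle (v0, v1, v2), or None.
    def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
    def cross(a, b): return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
    def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
    edge1, edge2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direction, edge2)
    det = dot(edge1, pvec)
    if abs(det) < eps:                 # ray parallel to the triangle's plane: no transversal hit
        return None
    inv_det = 1.0 / det
    tvec = sub(orig, v0)
    u = dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:             # early rejection on the first barycentric coordinate
        return None
    qvec = cross(tvec, edge1)
    v = dot(direction, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:         # early rejection on the second
        return None
    t = dot(edge2, qvec) * inv_det
    return t if t > eps else None

# Ray pointing straight down onto a triangle in the z = 0 plane: hits at t = 1.
print(ray_triangle((0.2, 0.2, 1.0), (0.0, 0.0, -1.0),
                   (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))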
No further bookmarks found.