Which is why I tend not to concern myself over whether something will be perceived to be condescending in a text medium. That has everything to do with the reader, and virtually nothing to do with the text.
-
I was a little concerned about the Time Saver badge, having never tested that arc in beta and noticing there's a time limit on it. So I asked Snow Globe about that badge and he told me it's just six bosses you have to kill; with incarnate pets it should be no problem.
Well, no matter how hard the devs make a boss, I can take six bosses, so I figure with a reasonable amount of inspirations this shouldn't be too difficult.
So I get to the Tempest, cast my lore pets, pop all my lucks and reds, then talk to Tempest, say I'm going to save both, and then the assassins all turn to me, and I Build Up and Aim, and then open fire with Energy Torrent and Explosive Blast...
And then I realized I had bosses turned off. And the mission honors that setting.
So if anyone is wondering if Time Saver is a hard badge to get, apparently a +3 Energy Blaster with an extra +100% damage and incarnate pets can get that badge in about four seconds. I think it'll be weeks before some of Tempest's buddies deorbit. -
-
Quote: I'm loving my new Dark/Dark dom, but I was wondering about Fearsome Stare. I know terrorized enemies can attack back if you hit them, but I seem to recall hearing there was a limit on how often they can attack back. It was something like once every 3 or 5 seconds, even if they're getting hit a lot. Does anyone have the exact info on this? It would help me to understand just how effective Fearsome Stare really is compared to other control powers.

I know it's once every five to ten seconds, if attacked, but I do not have the exact answer handy. I tested this a long time ago and I want to say it was approximately eight seconds on average, but that's going from a very old memory. -
Actually, the fact that they seemed to be making it up as they went along suggests to me that none of those ideas could have been made interesting on their own. The show, like many recent shows, wasn't *about* anything. It had no rudder, which I suppose is ironic given its cliffhanger conclusion.
-
Quote: Spitballing here, but how about:
First mission: foes with very high resistance.
Second: foes with very high defense.
Third: foes that are mez immune.
Fourth: foes that have very high damage capability.
Fifth: foes with strong debuffs.
Make sure that each mission has varied damage types and resist/defense types. Sound like a decent start?

I think that's a little more literal in terms of testing for different capabilities than I was thinking.
I was thinking more along the lines of testing against things that are "representative samples" of what actually exists in the game: a Malta mission (an "average" level 50 critter group), a Longbow mission (a group that tends to get increasingly dangerous as it becomes denser), a mission that tests against various AVs and Giant Monsters, stuff like that.
Of course, the limitations of the AE make certain things, like testing against Giant Monsters, not possible, but that's sort of the challenge of what you can engineer in the AE.
But there's also the question of testing sustainability. No matter how powerful the critter group, I can always find a way to defeat *one* spawn of it. Being able to sustain that and defeat six in a row is harder. So given the limitations of the AE, I'm wondering what the best way is to throw the most variety at the player, while balancing that against also testing sustainability. If I were to make an AE mission with Malta as the background critter type, then added a boatload of Carnies as patrols, I could make the Malta almost moot, because there would be very few of them scattered around to make room for the patrols. On the other hand, a mission full of Malta with a Boss at the end surrounded by Carnies and Carnie ambushes could validly test the player against both Malta and Carnies. If the Boss at the end was an AV, you could also test the ability to take down an AV.
A mission full of Cims would also be interesting because of all the stacking effects and defense debuffs, but also because that situation actually exists in the ITF. Actually, a "mini-ITF" was high on my list of things to implement, with Cims, Khelds, and a tough AV at the end.
I suppose in a sense I would be picking situations that represent some of the things you mentioned above, but the idea would be to look for the best examples of those things "in nature" so to speak, and then roughly replicate them, rather than creating situations that test for them explicitly.
The question is how you take all these situations and first create them, and second shove them into a single AE arc (or two, if it came to that). And which ones are the most interesting. I think a baseline with a critter type that's not too lethal is interesting. I think Carnies are interesting because they throw a rather interesting mix at players: end drain, debuffs, and a lot of psionic damage. I like the idea of testing against both single hard targets (AVs and the like) and large numbers.
Even with I22 having just launched, I will likely spend a significant amount of time on this, because I think it's an interesting idea if nothing else. A challenge mission that doesn't directly try to kill you, but tries to see how well you do instead. -
-
That was a separate complaint about shards (that they could not be earned in below-level-50 content), but that would not affect the maximum rate at which they could be earned.
-
Quote: The first day I ran an iTrial (long before anything was done to speed anything up), I hit t3 in 2 slots in under 3 hours. That's fast.

That's also unlikely. Statistically speaking, less than 33% of all players that ran about eight trials in a row at release would have been awarded at least the two rare drops necessary to do that. On top of that, depending on when you started running iTrials, if it was prior to down-conversion being added, that would have required 14 common drops, which you could only get by actually being lucky enough to get a common drop, or by creating them at 20 threads per. 280 threads in the eight trials would have also been a stretch. And eight trials in "under three hours" is itself a very generous estimate: there exists no possibility that the average trial execution rate is 22 minutes from league setup to completion.
Overall, I would estimate that less than 10% of all players that *tried* to do that would have succeeded at or near release, and much less than 5% of all players *actually* managed to see that earning rate. -
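Here's a quick back-of-the-envelope way to check that kind of claim: it reduces to a binomial tail. A minimal sketch, assuming a 10% per-trial chance of a rare drop (a placeholder value for illustration, not the actual release-era drop rate):

```python
from math import comb

def prob_at_least(k, n, p):
    """Probability of at least k successes in n independent trials,
    each with success probability p (binomial tail)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Assumed per-trial chance of a rare component drop; this is an
# illustrative placeholder, not the actual release-era drop rate.
rare_chance = 0.10
trials_run = 8

print(f"P(>=2 rares in {trials_run} trials) = {prob_at_least(2, trials_run, rare_chance):.2f}")
# ~0.19 with these assumed numbers, i.e. well under one player in three.
```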
Quote: Arcana,
While I appreciate what you are trying to say, the tests that I saw in beta were trying to purposely get results as fast as they could. As such, in three days of joining every trial they could when they were on, they had tier 3s of most everything. It would take longer to tier 4 everything.
HOWEVER, I get what you are saying. When the trials first come out, you need to run them over and over again to get good rewards, because as people try to figure them out, they take both longer and run the risk of failure, requiring more attempts to get the same reward (read: farming). However, as the trials get run more and more, and people figure out the right tactics, you can "speed" them more easily, for slightly less XP, but a higher rate of end rewards.
The DA content can't be sped up in a similar fashion, because of the time locks. Running things over and over again does not increase the rate of return as it would for running the trials. So there is definitely a difference there, in addition to the fact that rewards are lower in general.
As I said in one of my earlier posts, using people who are intentionally trying to get max rewards in a minimum amount of time is not a good comparison point for anything. But then what is? It would be highly subjective based on the player, of course. Somebody who plays every night would expect possibly the same overall rate as somebody who only plays twice a week. Say, a month per tier 3. However, Person A would then need a drop rate that is lower than Person B would want.
So, can we create a fictional person, who plays what we might consider an "average" amount, and see what the drop rate is for him? Yes, though we'd have some argument on how much this person plays. But it could at least give us a baseline of how slow or fast the new solo path is compared to the trial runner.
What about three players, each with the same schedule? One is only a trial runner, one only does the new DA content in a team, and one only does the new DA content solo. Each player plays for an hour a day, three days a week (I doubt that's a bad value to give these fictional players). What would the rate of gains be for these players?

The degree to which the tasks can be optimized is an issue, but not the most important issue. The more important issue is to look at the thing that can't really be optimized: shards. Shards were unreasonable, and then somehow became not just reasonable but ideal. That's not a trivial change in perspective. The shards didn't change, so it was the definition of "reasonable" that did. But so long as everyone is asking for what's "reasonable" while that word has a very slippery definition, there's no role for the numbers.
I read all the testing threads and posts, and while I appreciate the work done in quantifying *what* is happening, where I had an issue is in the surrounding discussion about what *should* happen. Completely different people were using the exact same numbers to demonstrate that the rate was fine and not fine. And while that always happens to some degree, here it was actually the case that there was no way to refute either side, because what was defined to be "reasonable" was so vast in range it could describe anything.
As to the question of whether an objective frame of reference is possible, I believe it is possible. I don't believe everyone will agree it's objective, but that's never actually been one of my concerns. However, there are only two reasons to change it: it's unbalanced, or it's unpalatable. It hasn't existed long enough for us to know if it's unbalanced, and if so to what degree. And if people will not agree on even what direction to head in, much less how far, no change at the moment can address the issue of the system being unpalatable. This is almost certainly a case where, regardless of what the system does now, we'll have to live with it for a while before the devs contemplate changing it, for the simple reason that any change has a greater chance of succeeding on both fronts later rather than now.
Where I think you'll have the greatest problem achieving any sort of unanimity is in balancing the opportunity limits of running trials against the always-available nature of the solo path, even with the time gates. On some servers, trials are run continuously and quickly and with a high success rate. On other servers they are run less frequently and/or with lower success rates. Balancing for the case where the player can run a trial whenever they wish with a high success rate is itself potentially a skewed assumption. These things are subjective value judgments, but they play a critical role in balancing the solo path against the trial path. -
Quote: I got into some balance and 'feel of game' discussions with a friend not too long ago about City versus other MMOs, and I noted that in a lot of MMOs, power design is around a unit of One Encounter. I then argued that in City, because of the way health, regen, and recharge works, we're really dealing with a unit of One Mission, because of the aggregate effects.

The notion that the unit of combat is more or less the mission forms one of the foundations for the giant scrapper mitigation analysis I did as well.
We have in-combat regeneration and recovery, so in City of Heroes combat is "sustainable." In most MMOs, combat is not sustainable as a practical matter in most cases, no matter what your team looks like or how you build or what your gear does. In most MMOs, you fight, you recover, and then you fight again.
This principle caused a lot of problems at the time, with people saying the "immortality line" calculations were meaningless unless you were a Highlander and never logged out.
One of the things that stands out in my memory about the book Moneyball was how very small effects, amounting statistically to small fractions of a run per game, could over a season mean the difference between the top of the division and the bottom. Things no one could ever see or notice when watching a game determined almost *everything* when it came to performance for an entire season. No one can "witness" the difference between a .300 hitter and a .220 hitter. No one can see the difference between someone who gets on base 45% of the time and someone who gets on base 37% of the time. We can more easily see things like who hits the most home runs, and those things don't actually count for much in the long run. -
Something about the notion that racial slurs related to Star Wars are among the most actionable CoH offenses is amusing to me. I'm picturing a list being created in a committee, and a group of people deciding that Star Wars references are bad, racial slurs are bad, but racial slurs related to Star Wars references are definitely in the top three, just above Rickrolled /jranger quote pyramids and just below North Korean Nazi Dinosaurs.
-
Quote: I agree that it should always be lower. But they could double the Dark Astoria rate, and it would still be orders of magnitude slower than the trials.

Orders of magnitude slower? Not really.
I think the real problem is that perception has completely nullified quantitative analysis when it comes to the incarnate system. The numbers don't mean anything anymore when they're basically being selected for their weapons-grade value only.
Here's what I see. The shard path used to be described as "unbearably slow," and calculations were produced to demonstrate that. Now it's being hailed as the model to follow. The iTrials were described as requiring ludicrous amounts of farming to make progress in, and now that ludicrous farming is apparently taking a single afternoon to complete. The shard system is basically identical, and the iTrial rewards have only increased a little: not enough to justify these polar opposite swings in perception.
I said in beta that I thought the solo path, based on measurements, was a bit low. I even gave a quantitative estimate for where I thought it was reasonable to lie relative to the iTrial path, which was somewhere in the general vicinity of the fastest possible solo path being within a binary order of magnitude of half the average iTrial rate. However, when I started to hear people say that, okay, it's perfectly easy to tier 4 everything in a weekend, I decided that any quantitative analysis of the situation was doomed to irrelevancy.
People want what they want, and sometimes that can be tempered by the requirements of game balance logic. This is not one of those times. That doesn't mean I think the people who want it faster or think it's too fast are wrong. It just means I think there's no objective foundation to decide either way: it's just a case of people wanting what they want. -
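To put a number on "within a binary order of magnitude of half the average iTrial rate": that's a factor-of-two band centered on half the trial rate. A minimal sketch, using a made-up trial rate purely for illustration (not a measured figure):

```python
# Hypothetical average iTrial reward rate, in threads-equivalent per hour.
# This value is a placeholder for illustration, not a measured figure.
itrial_rate = 40.0

target = itrial_rate / 2            # half the average iTrial rate
low, high = target / 2, target * 2  # within one binary order of magnitude

print(f"Fastest solo path band: {low:.0f} to {high:.0f} per hour "
      f"(vs {itrial_rate:.0f} for the trial path)")
```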
Quote: Anything Elite Boss and up, other blast sets lose effectiveness of their secondary effects. The -Res from Sonic will do less on an AV than it will a minion, lieutenant, or boss. The same goes for the -Tohit of Dark, -Def of Rad, Knockback of Energy, -Recharge of Psi, -SPD and -Recharge of Ice.

All critters Lt and higher have some intrinsic resistance to tohit debuffs based on their rank: Lts have 10% resistance to tohit debuffs, for example, while bosses and elite bosses have 20% and archvillains have 30%. This is intrinsic to the rank definition and has nothing to do with any powers the critter might have.
Elite Bosses specifically have 25% resistance to endurance drain and Archvillains have 50% resistance to endurance drain intrinsically - meaning all EBs and all AVs. Neither have intrinsic resistance to recovery debuffs. Again, this is intrinsic to the rank definitions and has nothing to do with any power the critter might have.
None of the standard critter ranks have intrinsic resistance to defense debuffs, knockback, or recharge.
It's literally impossible for anything to have special resistance to resistance debuffs: the only way to have that is to have resistance to the damage type. 25% smashing resistance is also automatically 25% resistance to smashing resistance debuffs. There is no other way to resist resistance debuffs.
AVs *usually* get special resistances in their special resistance power, but that's at the will of the critter designer. By default, AVs tend to resist endurance drain, recovery debuffs, defense debuff, regeneration debuffs, recharge debuffs, knock effects, and perception debuffs, and this resistance scales upward with level.
And to reiterate, AVs do not have special resistances against resistance debuffs. To put it another way, anything that was made immune to resistance debuffs would be immune to damage also. The game engine makes no distinction between the two. -
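To see the rank numbers in action, here's a minimal sketch of how that intrinsic tohit-debuff resistance trims an incoming debuff, assuming the usual behavior where debuff resistance simply scales the incoming magnitude down (the rank values are the ones above; the scaling model is an assumption of the sketch):

```python
# Intrinsic tohit-debuff resistance by rank, per the values above.
RANK_TOHIT_DEBUFF_RES = {
    "minion": 0.00,
    "lieutenant": 0.10,
    "boss": 0.20,
    "elite boss": 0.20,
    "archvillain": 0.30,
}

def applied_tohit_debuff(base_debuff, rank):
    """Debuff magnitude that actually lands, assuming debuff resistance
    scales the incoming magnitude down by (1 - resistance)."""
    return base_debuff * (1 - RANK_TOHIT_DEBUFF_RES[rank])

for rank in RANK_TOHIT_DEBUFF_RES:
    print(f"{rank:>11}: -{applied_tohit_debuff(0.10, rank) * 100:.1f}% tohit")
# A -10% tohit debuff lands as -10% on a minion, -8% on a boss, -7% on an AV.
```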
Strangely, I've been thinking about this from the opposite direction: creating a challenge arc that isn't designed to throw everything possible into a single mission, but the reverse - a set of five missions designed to represent what you normally see in the game, but at its maximum possible difficulty. I've been wondering what the best way of doing that is within the limits of the AE and its five-mission arc cap.
-
Quote: When performing a simple vertical leap (not forward), a female character now uses the male animation. On the way up, their legs are slightly spread, and their shoulders are hunched. On the way down, they have their legs bent, with the right leg slightly turned outward. It looks really bad.

It is, shall we say, less than lady-like in a skirt. -
Quote: To give an admittedly extreme example, my Dark Miasma characters can often wait out a mez without resorting to a Break Free because their own mezzes and/or debuffs have crippled their foes' offense, Tar Patch makes it hard for foes to reach my character, or because foes become engaged with my pet (or all three).

I believe that the ultimate reason in the case of Defenders is that no Defender lacks control, strong debuffs, defense, and healing simultaneously. Most people tend to look for dramatic things: if Defenders are not *immune* to something, then they must be just as vulnerable. But most of the time it's not the fight that kills us, it's the fight before that kills us. It puts us into a situation where we decide not to wait for Rest, not to pop our last set of inspirations because we're saving them for a rainy day that never comes; it forces us to start the fight at something other than 100%.
All it takes is one self heal, one defense power, one strong debuff to swing the odds enough to be noticeable, because effects are cumulative. Here I think experience can actually act *against* judgment: people who play the game longer get numbed to how much of a difference an unslotted Dispersion Bubble can make, for example. -
No, that's not what I'm saying. But is that a topic you'd like to discuss?
-
Quote: One theory (which I buy into) for why Blasters have suffered real, measured performance issues in the past is that they basically have an AT definition that calls for them to have team support to reach their peak performance. If you really need that support, then you are at risk if you are solo or if your team does not provide adequate support (due to team composition, skill, etc.). No other AT labors under a similar role definition that provides this limitation of scope - while they may need support on a powerset-by-powerset basis, they are allowed to have powersets that let them operate fine without support.

I think it's a lot of reasons that interconnect, but I think the biggest problem with the Blaster archetype definition is that it doesn't actually specifically prescribe having anything. Everyone knows what Blasters are not allowed to have: mez protection, significant defense or resistance, etc. But no one - not even the devs - can say what they are supposed to have.
There is no other archetype for which this is true: every other archetype has obvious and very specific things it is required to have, and proof that a powerset combination did not have any of the items on the list would be proof that the powerset combination was broken and required a fix. There is no such list that can be produced for Blasters.
Unless you say "damage" without qualification. But everyone is required to have damage. Saying Blasters are required to have damage says nothing at all. How would you know they had enough damage? There's no way to tell, except to say that just like everyone else they have enough to solo. -
Quote: well defenders underperformed soloing and the solution was a damage buff that only works solo or in small teams

Question: Is the Blaster situation unique? In other words, even if it's true that every Blaster powerset combination underperformed solo and teamed, does there exist any other archetype for which it was true that all powersets underperformed either solo or teamed, such as Defenders?
Answer: no.
I specifically mentioned that back when Defiance 2.0 was being adjusted, as a fact the devs confirmed I was allowed to mention.
I don't believe Defenders underperformed solo. I think the solo damage buff was a QoL buff, not a quantitative balance buff. When the devs were asked about Defender performance in the past, they often said that Defenders were the best-balanced archetype. At the time, the precise meaning of that statement was not known, but in retrospect what they were almost certainly saying was that across all of its powerset combinations, Defenders come the closest to being at or near the average performance of all players. They said that repeatedly, which means it was true over a very long period of time - at least from I5 to I11 - and there's no reason to believe that changed markedly. -
Quote: If two ATs are mezzed, all other things being equal, how is it possible to be more harmful to one versus another?

Are you saying you reject reality because you cannot come up with a theory to explain it?
According to Castle, Blasters spent more time mezzed than average. That's a difficult thing for the servers to miscount. According to Castle, Blasters died more often because of mez. That's also difficult to miscount. Blasters overall earned less per unit time than all other archetypes in almost any situation. That's *also* difficult to miscount. So either NCSoft computers can't count, or all of those represent reality. *Why* it happens is difficult to explain with certainty, although it's not that there aren't theories. But why it happens is a separate issue from the fact *that* it happens. Either a perspective of how the archetypes perform includes that fact, or that perspective doesn't match reality.
My own personal theory is that most people, myself included, tend to think about things in terms of a single fight. If two archetypes have indistinguishable performance after a single fight, then of course that should extend to a series of them. But that doesn't factor in the fact that performance is both statistical and coarsely grained. One attack more or less can have a big impact, and small changes can have cumulative impacts that are difficult to see in the short term. By my estimate, a difference in the probability of dying of two tenths of one percent per engagement would be essentially impossible for an observer to notice, but could alter the performance of the two characters by over 5% over the long run. A difference in dying percentage of a single percent is actually huge, because it underscores underlying things that likely have cumulative benefits or deficits. But even 1% is impossible for the average observer to detect. -
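To see why a fraction of a percent per engagement matters, treat each spawn as an independent roll and look at how often a character gets through an evening of fights without a single defeat. A minimal sketch; the defeat chances and the 150-fight "evening" are assumed placeholder numbers to illustrate the compounding, not datamined values:

```python
# Toy illustration: a 0.2 percentage point difference in defeat chance per
# engagement is invisible in any one fight but compounds over many fights.
# The probabilities and fight count are assumed, not datamined.

def no_defeat_streak(defeat_chance, engagements):
    """Probability of clearing 'engagements' consecutive spawns with no defeat."""
    return (1 - defeat_chance) ** engagements

for p in (0.002, 0.004):  # 0.2% vs 0.4% defeat chance per engagement
    print(f"p = {p:.3f}: chance of a defeat-free 150-fight evening = "
          f"{no_defeat_streak(p, 150):.2f}")
# ~0.74 vs ~0.55: no observer can see the per-fight difference,
# but one character faceplants noticeably more often over the long run.
```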
Quote: I think the reason blasters spend more time mezzed and defeated in teams is because they spend more time attacking their foes without preventing a counter attack, simple as that.
controllers/dominators can spend their time mezzing foes
defenders can be buffing allies or healing
blasters just attack. Which means if they are doing anything useful in a fight they are running the risk of getting aggro back.
I had an ice/emp controller. In a team he dropped ice slick, turned on arctic air, and then did block of ice or heal other. He contributed massively but every foe he attacked was held. He could draw aggro but it was rare.
A possible fix for this would be reduced aggro for blaster damage. Say they only get half aggro for the damage they do. They can still draw aggro, but it would be easier to unleash their attacks without getting attacked.

That would address underperformance in teams to some degree. It would not address underperformance solo. Anything done to Blasters should attempt to address their issues everywhere, and unlike any other archetype, it's been proven that they actually have issues everywhere. Everyone's common sense about Blaster performance is outright wrong. Even in the under-level-30 game, where people think Blasters might underperform in the late game but obviously work better there, Blasters have been datamined in the past to underperform. A solo Blaster at level 20 is more likely to underperform a solo Defender at level 20 than overperform it, or at least this was true prior to D2.0. Those are the facts. *Why* that's the case is not entirely understood precisely, although there are lots of potential reasons. But that it happens is no longer debatable, at least to me, without strong evidence to the contrary.
I believe it's better to find a solution that helps solo Blasters and happens to work in teams than the other way around, because it's more likely that something that works solo will also work, or can be made to work, in teams than the reverse.
Also, if the problem was that Defenders drew less aggro, it would be difficult to explain why their advantage doesn't disappear solo. But it does not.
I believe that the best test of any theory of Blaster performance is pre-D2.0 Blaster performance. Any theory claiming to reasonably describe Blaster performance today should be able to predict the horrendous performance of Blasters in the past. If the theory doesn't predict that, it suggests the theory is missing something. -
Quote: In my experience, I don't find the state of "being mezzed" any more/less harmful to my character's survival whether this happens as a blaster, controller, corrupter or defender. As a mezzed MM, likely less harmful than all others as my pets remain active even while I am mezzed. So it's situational and really depends on aggro level immediately preceding the mezz event in that case. And it's been like this for me since launch.

It has been more harmful by a significant margin to Blasters than other archetypes, including Controllers and Defenders, in the past for most players - probably going back to launch. There must exist gameplay options for Defenders and Controllers that allow them to avoid mez, or avoid the deleterious effects of mez, that don't exist for Blasters. This suggests your experience is not the norm. Balancing the game to your experience would thus fail to address the issues faced by the vast majority of players. -
Quote: Really?
Because I didn't get it either.
The "wardrobe malfunction", I got. That was mildly amusing.
But the spinach/iron thing?...
The only correlation to Arcana's second "joke" that I can think of is "Popeye".
And frankly, that IS a stretch, and a pretty poor one at that; certainly not deserving of knee-slapping in my book.
meh.

I didn't think this required that much explanation. But ok: if someone says "I would like a costume option that looks specifically like X" and I say "hmm, there might be problems with something that does that specifically," then even ignoring the tongue-in-cheek aspect of my post, replying with "some people want the comic book style" seems to have nothing whatsoever to do with my post: it's a reply without a reference. I don't know if that is intended to be an agreement, a contradiction, a tangential reference, a clarification, a refutation, an amplification, or a nullification.
Is that supposed to mean "yes, it is a wardrobe malfunction; all comic book style costumes are wardrobe malfunctions and so that's exactly what I want"? Or "no, it's not a wardrobe malfunction; the comic book style specifically mandates strappy but otherwise ill-fitting tops, as everyone knows"? Or "whether it's a wardrobe malfunction or not is irrelevant: the need to express the comic book style requires strappy but loose-fitting tops, and the consequences are not as important as the representation"? Or "I recognize that it's potentially a wardrobe malfunction, but I want to express the fact that many people are compelled to ask for things that result in that situation, because they are motivated to ask for things they believe best represent the comic book style, of which strappy but loose-fitting tops are a significant subset"?
There's so little connection between the two statements that the reader is required to invent one completely out of the blue, one that is in no way indicated by either statement. When the reader has to invent a connection between a statement and its reply, and the odds of being correct are not good, that's the very definition of a non sequitur.
And non sequitur replies are usually an indication that the person replying either doesn't understand the original statement, or does understand it but doesn't realize most people will not make the same connections they do, so they don't think there's any need to explain. My making another one was intended to illustrate the point: everyone either got it, or didn't get it. Those who didn't get it are struggling to figure out what I meant because they have no reference for what I meant, and are probably trying to invent meanings instead of recognizing the analogous experience of being handed a non sequitur.
Which is the point.
It wasn't really "a joke" because it wasn't intended to be funny. It was intended to get people to ask, if my reply appeared to have nothing to do with what it replied to, perhaps that was because what it replied to had nothing to do with the post it replied to either.
And I still do not know how asking for something that is *neither* painted on *nor* stretched across is a request for "the comic book style." I didn't realize there was such a definitive style, but even if there is one, the "loose strappy top look" has escaped my attention for a very long time.